xref: /aosp_15_r20/external/dlmalloc/dlmalloc.c (revision 2680e0c0bdff5fc86b0432efeeb5c26d7a2d8c83)
1*2680e0c0SChristopher Ferris /*
2*2680e0c0SChristopher Ferris   This is a version (aka dlmalloc) of malloc/free/realloc written by
3*2680e0c0SChristopher Ferris   Doug Lea and released to the public domain, as explained at
4*2680e0c0SChristopher Ferris   http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
5*2680e0c0SChristopher Ferris   comments, complaints, performance data, etc to [email protected]
6*2680e0c0SChristopher Ferris 
7*2680e0c0SChristopher Ferris * Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
8*2680e0c0SChristopher Ferris    Note: There may be an updated version of this malloc obtainable at
9*2680e0c0SChristopher Ferris            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
10*2680e0c0SChristopher Ferris          Check before installing!
11*2680e0c0SChristopher Ferris 
12*2680e0c0SChristopher Ferris * Quickstart
13*2680e0c0SChristopher Ferris 
14*2680e0c0SChristopher Ferris   This library is all in one file to simplify the most common usage:
15*2680e0c0SChristopher Ferris   ftp it, compile it (-O3), and link it into another program. All of
16*2680e0c0SChristopher Ferris   the compile-time options default to reasonable values for use on
17*2680e0c0SChristopher Ferris   most platforms.  You might later want to step through various
18*2680e0c0SChristopher Ferris   compile-time and dynamic tuning options.
19*2680e0c0SChristopher Ferris 
20*2680e0c0SChristopher Ferris   For convenience, an include file for code using this malloc is at:
21*2680e0c0SChristopher Ferris      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
22*2680e0c0SChristopher Ferris   You don't really need this .h file unless you call functions not
23*2680e0c0SChristopher Ferris   defined in your system include files.  The .h file contains only the
24*2680e0c0SChristopher Ferris   excerpts from this file needed for using this malloc on ANSI C/C++
25*2680e0c0SChristopher Ferris   systems, so long as you haven't changed compile-time options about
26*2680e0c0SChristopher Ferris   naming and tuning parameters.  If you do, then you can create your
27*2680e0c0SChristopher Ferris   own malloc.h that does include all settings by cutting at the point
28*2680e0c0SChristopher Ferris   indicated below. Note that you may already by default be using a C
29*2680e0c0SChristopher Ferris   library containing a malloc that is based on some version of this
30*2680e0c0SChristopher Ferris   malloc (for example in linux). You might still want to use the one
31*2680e0c0SChristopher Ferris   in this file to customize settings or to avoid overheads associated
32*2680e0c0SChristopher Ferris   with library versions.
33*2680e0c0SChristopher Ferris 
34*2680e0c0SChristopher Ferris * Vital statistics:
35*2680e0c0SChristopher Ferris 
36*2680e0c0SChristopher Ferris   Supported pointer/size_t representation:       4 or 8 bytes
37*2680e0c0SChristopher Ferris        size_t MUST be an unsigned type of the same width as
38*2680e0c0SChristopher Ferris        pointers. (If you are using an ancient system that declares
39*2680e0c0SChristopher Ferris        size_t as a signed type, or need it to be a different width
40*2680e0c0SChristopher Ferris        than pointers, you can use a previous release of this malloc
41*2680e0c0SChristopher Ferris        (e.g. 2.7.2) supporting these.)
42*2680e0c0SChristopher Ferris 
43*2680e0c0SChristopher Ferris   Alignment:                                     8 bytes (minimum)
44*2680e0c0SChristopher Ferris        This suffices for nearly all current machines and C compilers.
45*2680e0c0SChristopher Ferris        However, you can define MALLOC_ALIGNMENT to be wider than this
46*2680e0c0SChristopher Ferris        if necessary (up to 128bytes), at the expense of using more space.
47*2680e0c0SChristopher Ferris 
48*2680e0c0SChristopher Ferris   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
49*2680e0c0SChristopher Ferris                                           8 or 16 bytes (if 8byte sizes)
50*2680e0c0SChristopher Ferris        Each malloced chunk has a hidden word of overhead holding size
51*2680e0c0SChristopher Ferris        and status information, and additional cross-check word
52*2680e0c0SChristopher Ferris        if FOOTERS is defined.
53*2680e0c0SChristopher Ferris 
54*2680e0c0SChristopher Ferris   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
55*2680e0c0SChristopher Ferris                           8-byte ptrs:  32 bytes    (including overhead)
56*2680e0c0SChristopher Ferris 
57*2680e0c0SChristopher Ferris        Even a request for zero bytes (i.e., malloc(0)) returns a
58*2680e0c0SChristopher Ferris        pointer to something of the minimum allocatable size.
59*2680e0c0SChristopher Ferris        The maximum overhead wastage (i.e., number of extra bytes
60*2680e0c0SChristopher Ferris        allocated than were requested in malloc) is less than or equal
61*2680e0c0SChristopher Ferris        to the minimum size, except for requests >= mmap_threshold that
62*2680e0c0SChristopher Ferris        are serviced via mmap(), where the worst case wastage is about
63*2680e0c0SChristopher Ferris        32 bytes plus the remainder from a system page (the minimal
64*2680e0c0SChristopher Ferris        mmap unit); typically 4096 or 8192 bytes.
65*2680e0c0SChristopher Ferris 
66*2680e0c0SChristopher Ferris   Security: static-safe; optionally more or less
67*2680e0c0SChristopher Ferris        The "security" of malloc refers to the ability of malicious
68*2680e0c0SChristopher Ferris        code to accentuate the effects of errors (for example, freeing
69*2680e0c0SChristopher Ferris        space that is not currently malloc'ed or overwriting past the
70*2680e0c0SChristopher Ferris        ends of chunks) in code that calls malloc.  This malloc
71*2680e0c0SChristopher Ferris        guarantees not to modify any memory locations below the base of
72*2680e0c0SChristopher Ferris        heap, i.e., static variables, even in the presence of usage
73*2680e0c0SChristopher Ferris        errors.  The routines additionally detect most improper frees
74*2680e0c0SChristopher Ferris        and reallocs.  All this holds as long as the static bookkeeping
75*2680e0c0SChristopher Ferris        for malloc itself is not corrupted by some other means.  This
76*2680e0c0SChristopher Ferris        is only one aspect of security -- these checks do not, and
77*2680e0c0SChristopher Ferris        cannot, detect all possible programming errors.
78*2680e0c0SChristopher Ferris 
79*2680e0c0SChristopher Ferris        If FOOTERS is defined nonzero, then each allocated chunk
80*2680e0c0SChristopher Ferris        carries an additional check word to verify that it was malloced
81*2680e0c0SChristopher Ferris        from its space.  These check words are the same within each
82*2680e0c0SChristopher Ferris        execution of a program using malloc, but differ across
83*2680e0c0SChristopher Ferris        executions, so externally crafted fake chunks cannot be
84*2680e0c0SChristopher Ferris        freed. This improves security by rejecting frees/reallocs that
85*2680e0c0SChristopher Ferris        could corrupt heap memory, in addition to the checks preventing
86*2680e0c0SChristopher Ferris        writes to statics that are always on.  This may further improve
87*2680e0c0SChristopher Ferris        security at the expense of time and space overhead.  (Note that
88*2680e0c0SChristopher Ferris        FOOTERS may also be worth using with MSPACES.)
89*2680e0c0SChristopher Ferris 
90*2680e0c0SChristopher Ferris        By default detected errors cause the program to abort (calling
91*2680e0c0SChristopher Ferris        "abort()"). You can override this to instead proceed past
92*2680e0c0SChristopher Ferris        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
93*2680e0c0SChristopher Ferris        has no effect, and a malloc that encounters a bad address
94*2680e0c0SChristopher Ferris        caused by user overwrites will ignore the bad address by
95*2680e0c0SChristopher Ferris        dropping pointers and indices to all known memory. This may
96*2680e0c0SChristopher Ferris        be appropriate for programs that should continue if at all
97*2680e0c0SChristopher Ferris        possible in the face of programming errors, although they may
98*2680e0c0SChristopher Ferris        run out of memory because dropped memory is never reclaimed.
99*2680e0c0SChristopher Ferris 
100*2680e0c0SChristopher Ferris        If you don't like either of these options, you can define
101*2680e0c0SChristopher Ferris        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
102*2680e0c0SChristopher Ferris        else. And if if you are sure that your program using malloc has
103*2680e0c0SChristopher Ferris        no errors or vulnerabilities, you can define INSECURE to 1,
104*2680e0c0SChristopher Ferris        which might (or might not) provide a small performance improvement.
105*2680e0c0SChristopher Ferris 
106*2680e0c0SChristopher Ferris        It is also possible to limit the maximum total allocatable
107*2680e0c0SChristopher Ferris        space, using malloc_set_footprint_limit. This is not
108*2680e0c0SChristopher Ferris        designed as a security feature in itself (calls to set limits
109*2680e0c0SChristopher Ferris        are not screened or privileged), but may be useful as one
110*2680e0c0SChristopher Ferris        aspect of a secure implementation.
111*2680e0c0SChristopher Ferris 
112*2680e0c0SChristopher Ferris   Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
113*2680e0c0SChristopher Ferris        When USE_LOCKS is defined, each public call to malloc, free,
114*2680e0c0SChristopher Ferris        etc is surrounded with a lock. By default, this uses a plain
115*2680e0c0SChristopher Ferris        pthread mutex, win32 critical section, or a spin-lock if if
116*2680e0c0SChristopher Ferris        available for the platform and not disabled by setting
117*2680e0c0SChristopher Ferris        USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
118*2680e0c0SChristopher Ferris        recursive versions are used instead (which are not required for
119*2680e0c0SChristopher Ferris        base functionality but may be needed in layered extensions).
120*2680e0c0SChristopher Ferris        Using a global lock is not especially fast, and can be a major
121*2680e0c0SChristopher Ferris        bottleneck.  It is designed only to provide minimal protection
122*2680e0c0SChristopher Ferris        in concurrent environments, and to provide a basis for
123*2680e0c0SChristopher Ferris        extensions.  If you are using malloc in a concurrent program,
124*2680e0c0SChristopher Ferris        consider instead using nedmalloc
125*2680e0c0SChristopher Ferris        (http://www.nedprod.com/programs/portable/nedmalloc/) or
126*2680e0c0SChristopher Ferris        ptmalloc (See http://www.malloc.de), which are derived from
127*2680e0c0SChristopher Ferris        versions of this malloc.
128*2680e0c0SChristopher Ferris 
129*2680e0c0SChristopher Ferris   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
130*2680e0c0SChristopher Ferris        This malloc can use unix sbrk or any emulation (invoked using
131*2680e0c0SChristopher Ferris        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
132*2680e0c0SChristopher Ferris        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
133*2680e0c0SChristopher Ferris        memory.  On most unix systems, it tends to work best if both
134*2680e0c0SChristopher Ferris        MORECORE and MMAP are enabled.  On Win32, it uses emulations
135*2680e0c0SChristopher Ferris        based on VirtualAlloc. It also uses common C library functions
136*2680e0c0SChristopher Ferris        like memset.
137*2680e0c0SChristopher Ferris 
138*2680e0c0SChristopher Ferris   Compliance: I believe it is compliant with the Single Unix Specification
139*2680e0c0SChristopher Ferris        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
140*2680e0c0SChristopher Ferris        others as well.
141*2680e0c0SChristopher Ferris 
142*2680e0c0SChristopher Ferris * Overview of algorithms
143*2680e0c0SChristopher Ferris 
144*2680e0c0SChristopher Ferris   This is not the fastest, most space-conserving, most portable, or
145*2680e0c0SChristopher Ferris   most tunable malloc ever written. However it is among the fastest
146*2680e0c0SChristopher Ferris   while also being among the most space-conserving, portable and
147*2680e0c0SChristopher Ferris   tunable.  Consistent balance across these factors results in a good
148*2680e0c0SChristopher Ferris   general-purpose allocator for malloc-intensive programs.
149*2680e0c0SChristopher Ferris 
150*2680e0c0SChristopher Ferris   In most ways, this malloc is a best-fit allocator. Generally, it
151*2680e0c0SChristopher Ferris   chooses the best-fitting existing chunk for a request, with ties
152*2680e0c0SChristopher Ferris   broken in approximately least-recently-used order. (This strategy
153*2680e0c0SChristopher Ferris   normally maintains low fragmentation.) However, for requests less
154*2680e0c0SChristopher Ferris   than 256bytes, it deviates from best-fit when there is not an
155*2680e0c0SChristopher Ferris   exactly fitting available chunk by preferring to use space adjacent
156*2680e0c0SChristopher Ferris   to that used for the previous small request, as well as by breaking
157*2680e0c0SChristopher Ferris   ties in approximately most-recently-used order. (These enhance
158*2680e0c0SChristopher Ferris   locality of series of small allocations.)  And for very large requests
159*2680e0c0SChristopher Ferris   (>= 256Kb by default), it relies on system memory mapping
160*2680e0c0SChristopher Ferris   facilities, if supported.  (This helps avoid carrying around and
161*2680e0c0SChristopher Ferris   possibly fragmenting memory used only for large chunks.)
162*2680e0c0SChristopher Ferris 
163*2680e0c0SChristopher Ferris   All operations (except malloc_stats and mallinfo) have execution
164*2680e0c0SChristopher Ferris   times that are bounded by a constant factor of the number of bits in
165*2680e0c0SChristopher Ferris   a size_t, not counting any clearing in calloc or copying in realloc,
166*2680e0c0SChristopher Ferris   or actions surrounding MORECORE and MMAP that have times
167*2680e0c0SChristopher Ferris   proportional to the number of non-contiguous regions returned by
168*2680e0c0SChristopher Ferris   system allocation routines, which is often just 1. In real-time
169*2680e0c0SChristopher Ferris   applications, you can optionally suppress segment traversals using
170*2680e0c0SChristopher Ferris   NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
171*2680e0c0SChristopher Ferris   system allocators return non-contiguous spaces, at the typical
172*2680e0c0SChristopher Ferris   expense of carrying around more memory and increased fragmentation.
173*2680e0c0SChristopher Ferris 
174*2680e0c0SChristopher Ferris   The implementation is not very modular and seriously overuses
175*2680e0c0SChristopher Ferris   macros. Perhaps someday all C compilers will do as good a job
176*2680e0c0SChristopher Ferris   inlining modular code as can now be done by brute-force expansion,
177*2680e0c0SChristopher Ferris   but now, enough of them seem not to.
178*2680e0c0SChristopher Ferris 
179*2680e0c0SChristopher Ferris   Some compilers issue a lot of warnings about code that is
180*2680e0c0SChristopher Ferris   dead/unreachable only on some platforms, and also about intentional
181*2680e0c0SChristopher Ferris   uses of negation on unsigned types. All known cases of each can be
182*2680e0c0SChristopher Ferris   ignored.
183*2680e0c0SChristopher Ferris 
184*2680e0c0SChristopher Ferris   For a longer but out of date high-level description, see
185*2680e0c0SChristopher Ferris      http://gee.cs.oswego.edu/dl/html/malloc.html
186*2680e0c0SChristopher Ferris 
187*2680e0c0SChristopher Ferris * MSPACES
188*2680e0c0SChristopher Ferris   If MSPACES is defined, then in addition to malloc, free, etc.,
189*2680e0c0SChristopher Ferris   this file also defines mspace_malloc, mspace_free, etc. These
190*2680e0c0SChristopher Ferris   are versions of malloc routines that take an "mspace" argument
191*2680e0c0SChristopher Ferris   obtained using create_mspace, to control all internal bookkeeping.
192*2680e0c0SChristopher Ferris   If ONLY_MSPACES is defined, only these versions are compiled.
193*2680e0c0SChristopher Ferris   So if you would like to use this allocator for only some allocations,
194*2680e0c0SChristopher Ferris   and your system malloc for others, you can compile with
195*2680e0c0SChristopher Ferris   ONLY_MSPACES and then do something like...
196*2680e0c0SChristopher Ferris     static mspace mymspace = create_mspace(0,0); // for example
197*2680e0c0SChristopher Ferris     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
198*2680e0c0SChristopher Ferris 
199*2680e0c0SChristopher Ferris   (Note: If you only need one instance of an mspace, you can instead
200*2680e0c0SChristopher Ferris   use "USE_DL_PREFIX" to relabel the global malloc.)
201*2680e0c0SChristopher Ferris 
202*2680e0c0SChristopher Ferris   You can similarly create thread-local allocators by storing
203*2680e0c0SChristopher Ferris   mspaces as thread-locals. For example:
204*2680e0c0SChristopher Ferris     static __thread mspace tlms = 0;
205*2680e0c0SChristopher Ferris     void*  tlmalloc(size_t bytes) {
206*2680e0c0SChristopher Ferris       if (tlms == 0) tlms = create_mspace(0, 0);
207*2680e0c0SChristopher Ferris       return mspace_malloc(tlms, bytes);
208*2680e0c0SChristopher Ferris     }
209*2680e0c0SChristopher Ferris     void  tlfree(void* mem) { mspace_free(tlms, mem); }
210*2680e0c0SChristopher Ferris 
211*2680e0c0SChristopher Ferris   Unless FOOTERS is defined, each mspace is completely independent.
212*2680e0c0SChristopher Ferris   You cannot allocate from one and free to another (although
213*2680e0c0SChristopher Ferris   conformance is only weakly checked, so usage errors are not always
214*2680e0c0SChristopher Ferris   caught). If FOOTERS is defined, then each chunk carries around a tag
215*2680e0c0SChristopher Ferris   indicating its originating mspace, and frees are directed to their
216*2680e0c0SChristopher Ferris   originating spaces. Normally, this requires use of locks.
217*2680e0c0SChristopher Ferris 
218*2680e0c0SChristopher Ferris  -------------------------  Compile-time options ---------------------------
219*2680e0c0SChristopher Ferris 
220*2680e0c0SChristopher Ferris Be careful in setting #define values for numerical constants of type
221*2680e0c0SChristopher Ferris size_t. On some systems, literal values are not automatically extended
222*2680e0c0SChristopher Ferris to size_t precision unless they are explicitly casted. You can also
223*2680e0c0SChristopher Ferris use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
224*2680e0c0SChristopher Ferris 
225*2680e0c0SChristopher Ferris WIN32                    default: defined if _WIN32 defined
226*2680e0c0SChristopher Ferris   Defining WIN32 sets up defaults for MS environment and compilers.
227*2680e0c0SChristopher Ferris   Otherwise defaults are for unix. Beware that there seem to be some
228*2680e0c0SChristopher Ferris   cases where this malloc might not be a pure drop-in replacement for
229*2680e0c0SChristopher Ferris   Win32 malloc: Random-looking failures from Win32 GDI API's (eg;
230*2680e0c0SChristopher Ferris   SetDIBits()) may be due to bugs in some video driver implementations
231*2680e0c0SChristopher Ferris   when pixel buffers are malloc()ed, and the region spans more than
232*2680e0c0SChristopher Ferris   one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
233*2680e0c0SChristopher Ferris   default granularity, pixel buffers may straddle virtual allocation
234*2680e0c0SChristopher Ferris   regions more often than when using the Microsoft allocator.  You can
235*2680e0c0SChristopher Ferris   avoid this by using VirtualAlloc() and VirtualFree() for all pixel
236*2680e0c0SChristopher Ferris   buffers rather than using malloc().  If this is not possible,
237*2680e0c0SChristopher Ferris   recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
238*2680e0c0SChristopher Ferris   in cases where MSC and gcc (cygwin) are known to differ on WIN32,
239*2680e0c0SChristopher Ferris   conditions use _MSC_VER to distinguish them.
240*2680e0c0SChristopher Ferris 
241*2680e0c0SChristopher Ferris DLMALLOC_EXPORT       default: extern
242*2680e0c0SChristopher Ferris   Defines how public APIs are declared. If you want to export via a
243*2680e0c0SChristopher Ferris   Windows DLL, you might define this as
244*2680e0c0SChristopher Ferris     #define DLMALLOC_EXPORT extern  __declspec(dllexport)
245*2680e0c0SChristopher Ferris   If you want a POSIX ELF shared object, you might use
246*2680e0c0SChristopher Ferris     #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
247*2680e0c0SChristopher Ferris 
248*2680e0c0SChristopher Ferris MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
249*2680e0c0SChristopher Ferris   Controls the minimum alignment for malloc'ed chunks.  It must be a
250*2680e0c0SChristopher Ferris   power of two and at least 8, even on machines for which smaller
251*2680e0c0SChristopher Ferris   alignments would suffice. It may be defined as larger than this
252*2680e0c0SChristopher Ferris   though. Note however that code and data structures are optimized for
253*2680e0c0SChristopher Ferris   the case of 8-byte alignment.
254*2680e0c0SChristopher Ferris 
255*2680e0c0SChristopher Ferris MSPACES                  default: 0 (false)
256*2680e0c0SChristopher Ferris   If true, compile in support for independent allocation spaces.
257*2680e0c0SChristopher Ferris   This is only supported if HAVE_MMAP is true.
258*2680e0c0SChristopher Ferris 
259*2680e0c0SChristopher Ferris ONLY_MSPACES             default: 0 (false)
260*2680e0c0SChristopher Ferris   If true, only compile in mspace versions, not regular versions.
261*2680e0c0SChristopher Ferris 
262*2680e0c0SChristopher Ferris USE_LOCKS                default: 0 (false)
263*2680e0c0SChristopher Ferris   Causes each call to each public routine to be surrounded with
264*2680e0c0SChristopher Ferris   pthread or WIN32 mutex lock/unlock. (If set true, this can be
265*2680e0c0SChristopher Ferris   overridden on a per-mspace basis for mspace versions.) If set to a
266*2680e0c0SChristopher Ferris   non-zero value other than 1, locks are used, but their
267*2680e0c0SChristopher Ferris   implementation is left out, so lock functions must be supplied manually,
268*2680e0c0SChristopher Ferris   as described below.
269*2680e0c0SChristopher Ferris 
270*2680e0c0SChristopher Ferris USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
271*2680e0c0SChristopher Ferris   If true, uses custom spin locks for locking. This is currently
272*2680e0c0SChristopher Ferris   supported only gcc >= 4.1, older gccs on x86 platforms, and recent
273*2680e0c0SChristopher Ferris   MS compilers.  Otherwise, posix locks or win32 critical sections are
274*2680e0c0SChristopher Ferris   used.
275*2680e0c0SChristopher Ferris 
276*2680e0c0SChristopher Ferris USE_RECURSIVE_LOCKS      default: not defined
277*2680e0c0SChristopher Ferris   If defined nonzero, uses recursive (aka reentrant) locks, otherwise
278*2680e0c0SChristopher Ferris   uses plain mutexes. This is not required for malloc proper, but may
279*2680e0c0SChristopher Ferris   be needed for layered allocators such as nedmalloc.
280*2680e0c0SChristopher Ferris 
281*2680e0c0SChristopher Ferris LOCK_AT_FORK            default: not defined
282*2680e0c0SChristopher Ferris   If defined nonzero, performs pthread_atfork upon initialization
283*2680e0c0SChristopher Ferris   to initialize child lock while holding parent lock. The implementation
284*2680e0c0SChristopher Ferris   assumes that pthread locks (not custom locks) are being used. In other
285*2680e0c0SChristopher Ferris   cases, you may need to customize the implementation.
286*2680e0c0SChristopher Ferris 
287*2680e0c0SChristopher Ferris FOOTERS                  default: 0
288*2680e0c0SChristopher Ferris   If true, provide extra checking and dispatching by placing
289*2680e0c0SChristopher Ferris   information in the footers of allocated chunks. This adds
290*2680e0c0SChristopher Ferris   space and time overhead.
291*2680e0c0SChristopher Ferris 
292*2680e0c0SChristopher Ferris INSECURE                 default: 0
293*2680e0c0SChristopher Ferris   If true, omit checks for usage errors and heap space overwrites.
294*2680e0c0SChristopher Ferris 
295*2680e0c0SChristopher Ferris USE_DL_PREFIX            default: NOT defined
296*2680e0c0SChristopher Ferris   Causes compiler to prefix all public routines with the string 'dl'.
297*2680e0c0SChristopher Ferris   This can be useful when you only want to use this malloc in one part
298*2680e0c0SChristopher Ferris   of a program, using your regular system malloc elsewhere.
299*2680e0c0SChristopher Ferris 
300*2680e0c0SChristopher Ferris MALLOC_INSPECT_ALL       default: NOT defined
301*2680e0c0SChristopher Ferris   If defined, compiles malloc_inspect_all and mspace_inspect_all, that
302*2680e0c0SChristopher Ferris   perform traversal of all heap space.  Unless access to these
303*2680e0c0SChristopher Ferris   functions is otherwise restricted, you probably do not want to
304*2680e0c0SChristopher Ferris   include them in secure implementations.
305*2680e0c0SChristopher Ferris 
306*2680e0c0SChristopher Ferris ABORT                    default: defined as abort()
307*2680e0c0SChristopher Ferris   Defines how to abort on failed checks.  On most systems, a failed
308*2680e0c0SChristopher Ferris   check cannot die with an "assert" or even print an informative
309*2680e0c0SChristopher Ferris   message, because the underlying print routines in turn call malloc,
310*2680e0c0SChristopher Ferris   which will fail again.  Generally, the best policy is to simply call
311*2680e0c0SChristopher Ferris   abort(). It's not very useful to do more than this because many
312*2680e0c0SChristopher Ferris   errors due to overwriting will show up as address faults (null, odd
313*2680e0c0SChristopher Ferris   addresses etc) rather than malloc-triggered checks, so will also
314*2680e0c0SChristopher Ferris   abort.  Also, most compilers know that abort() does not return, so
315*2680e0c0SChristopher Ferris   can better optimize code conditionally calling it.
316*2680e0c0SChristopher Ferris 
317*2680e0c0SChristopher Ferris PROCEED_ON_ERROR           default: defined as 0 (false)
318*2680e0c0SChristopher Ferris   Controls whether detected bad addresses cause them to bypassed
319*2680e0c0SChristopher Ferris   rather than aborting. If set, detected bad arguments to free and
320*2680e0c0SChristopher Ferris   realloc are ignored. And all bookkeeping information is zeroed out
321*2680e0c0SChristopher Ferris   upon a detected overwrite of freed heap space, thus losing the
322*2680e0c0SChristopher Ferris   ability to ever return it from malloc again, but enabling the
323*2680e0c0SChristopher Ferris   application to proceed. If PROCEED_ON_ERROR is defined, the
324*2680e0c0SChristopher Ferris   static variable malloc_corruption_error_count is compiled in
325*2680e0c0SChristopher Ferris   and can be examined to see if errors have occurred. This option
326*2680e0c0SChristopher Ferris   generates slower code than the default abort policy.
327*2680e0c0SChristopher Ferris 
328*2680e0c0SChristopher Ferris DEBUG                    default: NOT defined
329*2680e0c0SChristopher Ferris   The DEBUG setting is mainly intended for people trying to modify
330*2680e0c0SChristopher Ferris   this code or diagnose problems when porting to new platforms.
331*2680e0c0SChristopher Ferris   However, it may also be able to better isolate user errors than just
332*2680e0c0SChristopher Ferris   using runtime checks.  The assertions in the check routines spell
333*2680e0c0SChristopher Ferris   out in more detail the assumptions and invariants underlying the
334*2680e0c0SChristopher Ferris   algorithms.  The checking is fairly extensive, and will slow down
335*2680e0c0SChristopher Ferris   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
336*2680e0c0SChristopher Ferris   set will attempt to check every non-mmapped allocated and free chunk
337*2680e0c0SChristopher Ferris   in the course of computing the summaries.
338*2680e0c0SChristopher Ferris 
339*2680e0c0SChristopher Ferris ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
340*2680e0c0SChristopher Ferris   Debugging assertion failures can be nearly impossible if your
341*2680e0c0SChristopher Ferris   version of the assert macro causes malloc to be called, which will
342*2680e0c0SChristopher Ferris   lead to a cascade of further failures, blowing the runtime stack.
343*2680e0c0SChristopher Ferris   ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
344*2680e0c0SChristopher Ferris   which will usually make debugging easier.
345*2680e0c0SChristopher Ferris 
346*2680e0c0SChristopher Ferris MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
347*2680e0c0SChristopher Ferris   The action to take before "return 0" when malloc fails to be able to
348*2680e0c0SChristopher Ferris   return memory because there is none available.
349*2680e0c0SChristopher Ferris 
350*2680e0c0SChristopher Ferris HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
351*2680e0c0SChristopher Ferris   True if this system supports sbrk or an emulation of it.
352*2680e0c0SChristopher Ferris 
353*2680e0c0SChristopher Ferris MORECORE                  default: sbrk
354*2680e0c0SChristopher Ferris   The name of the sbrk-style system routine to call to obtain more
355*2680e0c0SChristopher Ferris   memory.  See below for guidance on writing custom MORECORE
356*2680e0c0SChristopher Ferris   functions. The type of the argument to sbrk/MORECORE varies across
357*2680e0c0SChristopher Ferris   systems.  It cannot be size_t, because it supports negative
358*2680e0c0SChristopher Ferris   arguments, so it is normally the signed type of the same width as
359*2680e0c0SChristopher Ferris   size_t (sometimes declared as "intptr_t").  It doesn't much matter
360*2680e0c0SChristopher Ferris   though. Internally, we only call it with arguments less than half
361*2680e0c0SChristopher Ferris   the max value of a size_t, which should work across all reasonable
362*2680e0c0SChristopher Ferris   possibilities, although sometimes generating compiler warnings.
363*2680e0c0SChristopher Ferris 
364*2680e0c0SChristopher Ferris MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
365*2680e0c0SChristopher Ferris   If true, take advantage of fact that consecutive calls to MORECORE
366*2680e0c0SChristopher Ferris   with positive arguments always return contiguous increasing
367*2680e0c0SChristopher Ferris   addresses.  This is true of unix sbrk. It does not hurt too much to
368*2680e0c0SChristopher Ferris   set it true anyway, since malloc copes with non-contiguities.
369*2680e0c0SChristopher Ferris   Setting it false when definitely non-contiguous saves time
370*2680e0c0SChristopher Ferris   and possibly wasted space it would take to discover this though.
371*2680e0c0SChristopher Ferris 
372*2680e0c0SChristopher Ferris MORECORE_CANNOT_TRIM      default: NOT defined
373*2680e0c0SChristopher Ferris   True if MORECORE cannot release space back to the system when given
374*2680e0c0SChristopher Ferris   negative arguments. This is generally necessary only if you are
375*2680e0c0SChristopher Ferris   using a hand-crafted MORECORE function that cannot handle negative
376*2680e0c0SChristopher Ferris   arguments.
377*2680e0c0SChristopher Ferris 
378*2680e0c0SChristopher Ferris NO_SEGMENT_TRAVERSAL       default: 0
379*2680e0c0SChristopher Ferris   If non-zero, suppresses traversals of memory segments
380*2680e0c0SChristopher Ferris   returned by either MORECORE or CALL_MMAP. This disables
381*2680e0c0SChristopher Ferris   merging of segments that are contiguous, and selectively
382*2680e0c0SChristopher Ferris   releasing them to the OS if unused, but bounds execution times.
383*2680e0c0SChristopher Ferris 
384*2680e0c0SChristopher Ferris HAVE_MMAP                 default: 1 (true)
385*2680e0c0SChristopher Ferris   True if this system supports mmap or an emulation of it.  If so, and
386*2680e0c0SChristopher Ferris   HAVE_MORECORE is not true, MMAP is used for all system
387*2680e0c0SChristopher Ferris   allocation. If set and HAVE_MORECORE is true as well, MMAP is
388*2680e0c0SChristopher Ferris   primarily used to directly allocate very large blocks. It is also
389*2680e0c0SChristopher Ferris   used as a backup strategy in cases where MORECORE fails to provide
390*2680e0c0SChristopher Ferris   space from system. Note: A single call to MUNMAP is assumed to be
391*2680e0c0SChristopher Ferris   able to unmap memory that may have be allocated using multiple calls
392*2680e0c0SChristopher Ferris   to MMAP, so long as they are adjacent.
393*2680e0c0SChristopher Ferris 
394*2680e0c0SChristopher Ferris HAVE_MREMAP               default: 1 on linux, else 0
395*2680e0c0SChristopher Ferris   If true realloc() uses mremap() to re-allocate large blocks and
396*2680e0c0SChristopher Ferris   extend or shrink allocation spaces.
397*2680e0c0SChristopher Ferris 
398*2680e0c0SChristopher Ferris MMAP_CLEARS               default: 1 except on WINCE.
399*2680e0c0SChristopher Ferris   True if mmap clears memory so calloc doesn't need to. This is true
400*2680e0c0SChristopher Ferris   for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
401*2680e0c0SChristopher Ferris 
402*2680e0c0SChristopher Ferris USE_BUILTIN_FFS            default: 0 (i.e., not used)
403*2680e0c0SChristopher Ferris   Causes malloc to use the builtin ffs() function to compute indices.
404*2680e0c0SChristopher Ferris   Some compilers may recognize and intrinsify ffs to be faster than the
405*2680e0c0SChristopher Ferris   supplied C version. Also, the case of x86 using gcc is special-cased
406*2680e0c0SChristopher Ferris   to an asm instruction, so is already as fast as it can be, and so
407*2680e0c0SChristopher Ferris   this setting has no effect. Similarly for Win32 under recent MS compilers.
408*2680e0c0SChristopher Ferris   (On most x86s, the asm version is only slightly faster than the C version.)
409*2680e0c0SChristopher Ferris 
410*2680e0c0SChristopher Ferris malloc_getpagesize         default: derive from system includes, or 4096.
411*2680e0c0SChristopher Ferris   The system page size. To the extent possible, this malloc manages
412*2680e0c0SChristopher Ferris   memory from the system in page-size units.  This may be (and
413*2680e0c0SChristopher Ferris   usually is) a function rather than a constant. This is ignored
414*2680e0c0SChristopher Ferris   if WIN32, where page size is determined using getSystemInfo during
415*2680e0c0SChristopher Ferris   initialization.
416*2680e0c0SChristopher Ferris 
417*2680e0c0SChristopher Ferris USE_DEV_RANDOM             default: 0 (i.e., not used)
418*2680e0c0SChristopher Ferris   Causes malloc to use /dev/random to initialize secure magic seed for
419*2680e0c0SChristopher Ferris   stamping footers. Otherwise, the current time is used.
420*2680e0c0SChristopher Ferris 
421*2680e0c0SChristopher Ferris NO_MALLINFO                default: 0
422*2680e0c0SChristopher Ferris   If defined, don't compile "mallinfo". This can be a simple way
423*2680e0c0SChristopher Ferris   of dealing with mismatches between system declarations and
424*2680e0c0SChristopher Ferris   those in this file.
425*2680e0c0SChristopher Ferris 
426*2680e0c0SChristopher Ferris MALLINFO_FIELD_TYPE        default: size_t
427*2680e0c0SChristopher Ferris   The type of the fields in the mallinfo struct. This was originally
428*2680e0c0SChristopher Ferris   defined as "int" in SVID etc, but is more usefully defined as
429*2680e0c0SChristopher Ferris   size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
430*2680e0c0SChristopher Ferris 
431*2680e0c0SChristopher Ferris NO_MALLOC_STATS            default: 0
432*2680e0c0SChristopher Ferris   If defined, don't compile "malloc_stats". This avoids calls to
433*2680e0c0SChristopher Ferris   fprintf and bringing in stdio dependencies you might not want.
434*2680e0c0SChristopher Ferris 
435*2680e0c0SChristopher Ferris REALLOC_ZERO_BYTES_FREES    default: not defined
436*2680e0c0SChristopher Ferris   This should be set if a call to realloc with zero bytes should
437*2680e0c0SChristopher Ferris   be the same as a call to free. Some people think it should. Otherwise,
438*2680e0c0SChristopher Ferris   since this malloc returns a unique pointer for malloc(0), so does
439*2680e0c0SChristopher Ferris   realloc(p, 0).
440*2680e0c0SChristopher Ferris 
441*2680e0c0SChristopher Ferris LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
442*2680e0c0SChristopher Ferris LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
443*2680e0c0SChristopher Ferris LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H  default: NOT defined unless on WIN32
444*2680e0c0SChristopher Ferris   Define these if your system does not have these header files.
445*2680e0c0SChristopher Ferris   You might need to manually insert some of the declarations they provide.
446*2680e0c0SChristopher Ferris 
447*2680e0c0SChristopher Ferris DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
448*2680e0c0SChristopher Ferris                                 system_info.dwAllocationGranularity in WIN32,
449*2680e0c0SChristopher Ferris                                 otherwise 64K.
450*2680e0c0SChristopher Ferris       Also settable using mallopt(M_GRANULARITY, x)
451*2680e0c0SChristopher Ferris   The unit for allocating and deallocating memory from the system.  On
452*2680e0c0SChristopher Ferris   most systems with contiguous MORECORE, there is no reason to
453*2680e0c0SChristopher Ferris   make this more than a page. However, systems with MMAP tend to
454*2680e0c0SChristopher Ferris   either require or encourage larger granularities.  You can increase
455*2680e0c0SChristopher Ferris   this value to prevent system allocation functions to be called so
456*2680e0c0SChristopher Ferris   often, especially if they are slow.  The value must be at least one
457*2680e0c0SChristopher Ferris   page and must be a power of two.  Setting to 0 causes initialization
458*2680e0c0SChristopher Ferris   to either page size or win32 region size.  (Note: In previous
459*2680e0c0SChristopher Ferris   versions of malloc, the equivalent of this option was called
460*2680e0c0SChristopher Ferris   "TOP_PAD")
461*2680e0c0SChristopher Ferris 
462*2680e0c0SChristopher Ferris DEFAULT_TRIM_THRESHOLD    default: 2MB
463*2680e0c0SChristopher Ferris       Also settable using mallopt(M_TRIM_THRESHOLD, x)
464*2680e0c0SChristopher Ferris   The maximum amount of unused top-most memory to keep before
465*2680e0c0SChristopher Ferris   releasing via malloc_trim in free().  Automatic trimming is mainly
466*2680e0c0SChristopher Ferris   useful in long-lived programs using contiguous MORECORE.  Because
467*2680e0c0SChristopher Ferris   trimming via sbrk can be slow on some systems, and can sometimes be
468*2680e0c0SChristopher Ferris   wasteful (in cases where programs immediately afterward allocate
469*2680e0c0SChristopher Ferris   more large chunks) the value should be high enough so that your
470*2680e0c0SChristopher Ferris   overall system performance would improve by releasing this much
471*2680e0c0SChristopher Ferris   memory.  As a rough guide, you might set to a value close to the
472*2680e0c0SChristopher Ferris   average size of a process (program) running on your system.
473*2680e0c0SChristopher Ferris   Releasing this much memory would allow such a process to run in
474*2680e0c0SChristopher Ferris   memory.  Generally, it is worth tuning trim thresholds when a
475*2680e0c0SChristopher Ferris   program undergoes phases where several large chunks are allocated
476*2680e0c0SChristopher Ferris   and released in ways that can reuse each other's storage, perhaps
477*2680e0c0SChristopher Ferris   mixed with phases where there are no such chunks at all. The trim
478*2680e0c0SChristopher Ferris   value must be greater than page size to have any useful effect.  To
479*2680e0c0SChristopher Ferris   disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
480*2680e0c0SChristopher Ferris   some people use of mallocing a huge space and then freeing it at
481*2680e0c0SChristopher Ferris   program startup, in an attempt to reserve system memory, doesn't
482*2680e0c0SChristopher Ferris   have the intended effect under automatic trimming, since that memory
483*2680e0c0SChristopher Ferris   will immediately be returned to the system.
484*2680e0c0SChristopher Ferris 
485*2680e0c0SChristopher Ferris DEFAULT_MMAP_THRESHOLD       default: 256K
486*2680e0c0SChristopher Ferris       Also settable using mallopt(M_MMAP_THRESHOLD, x)
487*2680e0c0SChristopher Ferris   The request size threshold for using MMAP to directly service a
488*2680e0c0SChristopher Ferris   request. Requests of at least this size that cannot be allocated
489*2680e0c0SChristopher Ferris   using already-existing space will be serviced via mmap.  (If enough
490*2680e0c0SChristopher Ferris   normal freed space already exists it is used instead.)  Using mmap
491*2680e0c0SChristopher Ferris   segregates relatively large chunks of memory so that they can be
492*2680e0c0SChristopher Ferris   individually obtained and released from the host system. A request
493*2680e0c0SChristopher Ferris   serviced through mmap is never reused by any other request (at least
494*2680e0c0SChristopher Ferris   not directly; the system may just so happen to remap successive
495*2680e0c0SChristopher Ferris   requests to the same locations).  Segregating space in this way has
496*2680e0c0SChristopher Ferris   the benefits that: Mmapped space can always be individually released
497*2680e0c0SChristopher Ferris   back to the system, which helps keep the system level memory demands
498*2680e0c0SChristopher Ferris   of a long-lived program low.  Also, mapped memory doesn't become
499*2680e0c0SChristopher Ferris   `locked' between other chunks, as can happen with normally allocated
500*2680e0c0SChristopher Ferris   chunks, which means that even trimming via malloc_trim would not
501*2680e0c0SChristopher Ferris   release them.  However, it has the disadvantage that the space
502*2680e0c0SChristopher Ferris   cannot be reclaimed, consolidated, and then used to service later
503*2680e0c0SChristopher Ferris   requests, as happens with normal chunks.  The advantages of mmap
504*2680e0c0SChristopher Ferris   nearly always outweigh disadvantages for "large" chunks, but the
505*2680e0c0SChristopher Ferris   value of "large" may vary across systems.  The default is an
506*2680e0c0SChristopher Ferris   empirically derived value that works well in most systems. You can
507*2680e0c0SChristopher Ferris   disable mmap by setting to MAX_SIZE_T.
508*2680e0c0SChristopher Ferris 
509*2680e0c0SChristopher Ferris MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
510*2680e0c0SChristopher Ferris   The number of consolidated frees between checks to release
511*2680e0c0SChristopher Ferris   unused segments when freeing. When using non-contiguous segments,
512*2680e0c0SChristopher Ferris   especially with multiple mspaces, checking only for topmost space
513*2680e0c0SChristopher Ferris   doesn't always suffice to trigger trimming. To compensate for this,
514*2680e0c0SChristopher Ferris   free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
515*2680e0c0SChristopher Ferris   current number of segments, if greater) try to release unused
516*2680e0c0SChristopher Ferris   segments to the OS when freeing chunks that result in
517*2680e0c0SChristopher Ferris   consolidation. The best value for this parameter is a compromise
518*2680e0c0SChristopher Ferris   between slowing down frees with relatively costly checks that
519*2680e0c0SChristopher Ferris   rarely trigger versus holding on to unused memory. To effectively
520*2680e0c0SChristopher Ferris   disable, set to MAX_SIZE_T. This may lead to a very slight speed
521*2680e0c0SChristopher Ferris   improvement at the expense of carrying around more memory.
522*2680e0c0SChristopher Ferris */
523*2680e0c0SChristopher Ferris 
524*2680e0c0SChristopher Ferris /* Version identifier to allow people to support multiple versions */
525*2680e0c0SChristopher Ferris #ifndef DLMALLOC_VERSION
526*2680e0c0SChristopher Ferris #define DLMALLOC_VERSION 20806
527*2680e0c0SChristopher Ferris #endif /* DLMALLOC_VERSION */
528*2680e0c0SChristopher Ferris 
529*2680e0c0SChristopher Ferris #ifndef DLMALLOC_EXPORT
530*2680e0c0SChristopher Ferris #define DLMALLOC_EXPORT extern
531*2680e0c0SChristopher Ferris #endif
532*2680e0c0SChristopher Ferris 
533*2680e0c0SChristopher Ferris #ifndef WIN32
534*2680e0c0SChristopher Ferris #ifdef _WIN32
535*2680e0c0SChristopher Ferris #define WIN32 1
536*2680e0c0SChristopher Ferris #endif  /* _WIN32 */
537*2680e0c0SChristopher Ferris #ifdef _WIN32_WCE
538*2680e0c0SChristopher Ferris #define LACKS_FCNTL_H
539*2680e0c0SChristopher Ferris #define WIN32 1
540*2680e0c0SChristopher Ferris #endif /* _WIN32_WCE */
541*2680e0c0SChristopher Ferris #endif  /* WIN32 */
542*2680e0c0SChristopher Ferris #ifdef WIN32
543*2680e0c0SChristopher Ferris #define WIN32_LEAN_AND_MEAN
544*2680e0c0SChristopher Ferris #include <windows.h>
545*2680e0c0SChristopher Ferris #include <tchar.h>
546*2680e0c0SChristopher Ferris #define HAVE_MMAP 1
547*2680e0c0SChristopher Ferris #define HAVE_MORECORE 0
548*2680e0c0SChristopher Ferris #define LACKS_UNISTD_H
549*2680e0c0SChristopher Ferris #define LACKS_SYS_PARAM_H
550*2680e0c0SChristopher Ferris #define LACKS_SYS_MMAN_H
551*2680e0c0SChristopher Ferris #define LACKS_STRING_H
552*2680e0c0SChristopher Ferris #define LACKS_STRINGS_H
553*2680e0c0SChristopher Ferris #define LACKS_SYS_TYPES_H
554*2680e0c0SChristopher Ferris #define LACKS_ERRNO_H
555*2680e0c0SChristopher Ferris #define LACKS_SCHED_H
556*2680e0c0SChristopher Ferris #ifndef MALLOC_FAILURE_ACTION
557*2680e0c0SChristopher Ferris #define MALLOC_FAILURE_ACTION
558*2680e0c0SChristopher Ferris #endif /* MALLOC_FAILURE_ACTION */
559*2680e0c0SChristopher Ferris #ifndef MMAP_CLEARS
560*2680e0c0SChristopher Ferris #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
561*2680e0c0SChristopher Ferris #define MMAP_CLEARS 0
562*2680e0c0SChristopher Ferris #else
563*2680e0c0SChristopher Ferris #define MMAP_CLEARS 1
564*2680e0c0SChristopher Ferris #endif /* _WIN32_WCE */
565*2680e0c0SChristopher Ferris #endif /*MMAP_CLEARS */
566*2680e0c0SChristopher Ferris #endif  /* WIN32 */
567*2680e0c0SChristopher Ferris 
568*2680e0c0SChristopher Ferris #if defined(DARWIN) || defined(_DARWIN)
569*2680e0c0SChristopher Ferris /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
570*2680e0c0SChristopher Ferris #ifndef HAVE_MORECORE
571*2680e0c0SChristopher Ferris #define HAVE_MORECORE 0
572*2680e0c0SChristopher Ferris #define HAVE_MMAP 1
573*2680e0c0SChristopher Ferris /* OSX allocators provide 16 byte alignment */
574*2680e0c0SChristopher Ferris #ifndef MALLOC_ALIGNMENT
575*2680e0c0SChristopher Ferris #define MALLOC_ALIGNMENT ((size_t)16U)
576*2680e0c0SChristopher Ferris #endif
577*2680e0c0SChristopher Ferris #endif  /* HAVE_MORECORE */
578*2680e0c0SChristopher Ferris #endif  /* DARWIN */
579*2680e0c0SChristopher Ferris 
580*2680e0c0SChristopher Ferris #ifndef LACKS_SYS_TYPES_H
581*2680e0c0SChristopher Ferris #include <sys/types.h>  /* For size_t */
582*2680e0c0SChristopher Ferris #endif  /* LACKS_SYS_TYPES_H */
583*2680e0c0SChristopher Ferris 
584*2680e0c0SChristopher Ferris /* The maximum possible size_t value has all bits set */
585*2680e0c0SChristopher Ferris #define MAX_SIZE_T           (~(size_t)0)
586*2680e0c0SChristopher Ferris 
587*2680e0c0SChristopher Ferris #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
588*2680e0c0SChristopher Ferris #define USE_LOCKS  ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
589*2680e0c0SChristopher Ferris                     (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
590*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
591*2680e0c0SChristopher Ferris 
592*2680e0c0SChristopher Ferris #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
593*2680e0c0SChristopher Ferris #if ((defined(__GNUC__) &&                                              \
594*2680e0c0SChristopher Ferris       ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||      \
595*2680e0c0SChristopher Ferris        defined(__i386__) || defined(__x86_64__))) ||                    \
596*2680e0c0SChristopher Ferris      (defined(_MSC_VER) && _MSC_VER>=1310))
597*2680e0c0SChristopher Ferris #ifndef USE_SPIN_LOCKS
598*2680e0c0SChristopher Ferris #define USE_SPIN_LOCKS 1
599*2680e0c0SChristopher Ferris #endif /* USE_SPIN_LOCKS */
600*2680e0c0SChristopher Ferris #elif USE_SPIN_LOCKS
601*2680e0c0SChristopher Ferris #error "USE_SPIN_LOCKS defined without implementation"
602*2680e0c0SChristopher Ferris #endif /* ... locks available... */
603*2680e0c0SChristopher Ferris #elif !defined(USE_SPIN_LOCKS)
604*2680e0c0SChristopher Ferris #define USE_SPIN_LOCKS 0
605*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
606*2680e0c0SChristopher Ferris 
607*2680e0c0SChristopher Ferris #ifndef ONLY_MSPACES
608*2680e0c0SChristopher Ferris #define ONLY_MSPACES 0
609*2680e0c0SChristopher Ferris #endif  /* ONLY_MSPACES */
610*2680e0c0SChristopher Ferris #ifndef MSPACES
611*2680e0c0SChristopher Ferris #if ONLY_MSPACES
612*2680e0c0SChristopher Ferris #define MSPACES 1
613*2680e0c0SChristopher Ferris #else   /* ONLY_MSPACES */
614*2680e0c0SChristopher Ferris #define MSPACES 0
615*2680e0c0SChristopher Ferris #endif  /* ONLY_MSPACES */
616*2680e0c0SChristopher Ferris #endif  /* MSPACES */
617*2680e0c0SChristopher Ferris #ifndef MALLOC_ALIGNMENT
618*2680e0c0SChristopher Ferris #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
619*2680e0c0SChristopher Ferris #endif  /* MALLOC_ALIGNMENT */
620*2680e0c0SChristopher Ferris #ifndef FOOTERS
621*2680e0c0SChristopher Ferris #define FOOTERS 0
622*2680e0c0SChristopher Ferris #endif  /* FOOTERS */
623*2680e0c0SChristopher Ferris #ifndef ABORT
624*2680e0c0SChristopher Ferris #define ABORT  abort()
625*2680e0c0SChristopher Ferris #endif  /* ABORT */
626*2680e0c0SChristopher Ferris #ifndef ABORT_ON_ASSERT_FAILURE
627*2680e0c0SChristopher Ferris #define ABORT_ON_ASSERT_FAILURE 1
628*2680e0c0SChristopher Ferris #endif  /* ABORT_ON_ASSERT_FAILURE */
629*2680e0c0SChristopher Ferris #ifndef PROCEED_ON_ERROR
630*2680e0c0SChristopher Ferris #define PROCEED_ON_ERROR 0
631*2680e0c0SChristopher Ferris #endif  /* PROCEED_ON_ERROR */
632*2680e0c0SChristopher Ferris 
633*2680e0c0SChristopher Ferris #ifndef INSECURE
634*2680e0c0SChristopher Ferris #define INSECURE 0
635*2680e0c0SChristopher Ferris #endif  /* INSECURE */
636*2680e0c0SChristopher Ferris #ifndef MALLOC_INSPECT_ALL
637*2680e0c0SChristopher Ferris #define MALLOC_INSPECT_ALL 0
638*2680e0c0SChristopher Ferris #endif  /* MALLOC_INSPECT_ALL */
639*2680e0c0SChristopher Ferris #ifndef HAVE_MMAP
640*2680e0c0SChristopher Ferris #define HAVE_MMAP 1
641*2680e0c0SChristopher Ferris #endif  /* HAVE_MMAP */
642*2680e0c0SChristopher Ferris #ifndef MMAP_CLEARS
643*2680e0c0SChristopher Ferris #define MMAP_CLEARS 1
644*2680e0c0SChristopher Ferris #endif  /* MMAP_CLEARS */
645*2680e0c0SChristopher Ferris #ifndef HAVE_MREMAP
646*2680e0c0SChristopher Ferris #ifdef linux
647*2680e0c0SChristopher Ferris #define HAVE_MREMAP 1
648*2680e0c0SChristopher Ferris #define _GNU_SOURCE /* Turns on mremap() definition */
649*2680e0c0SChristopher Ferris #else   /* linux */
650*2680e0c0SChristopher Ferris #define HAVE_MREMAP 0
651*2680e0c0SChristopher Ferris #endif  /* linux */
652*2680e0c0SChristopher Ferris #endif  /* HAVE_MREMAP */
653*2680e0c0SChristopher Ferris #ifndef MALLOC_FAILURE_ACTION
654*2680e0c0SChristopher Ferris #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
655*2680e0c0SChristopher Ferris #endif  /* MALLOC_FAILURE_ACTION */
656*2680e0c0SChristopher Ferris #ifndef HAVE_MORECORE
657*2680e0c0SChristopher Ferris #if ONLY_MSPACES
658*2680e0c0SChristopher Ferris #define HAVE_MORECORE 0
659*2680e0c0SChristopher Ferris #else   /* ONLY_MSPACES */
660*2680e0c0SChristopher Ferris #define HAVE_MORECORE 1
661*2680e0c0SChristopher Ferris #endif  /* ONLY_MSPACES */
662*2680e0c0SChristopher Ferris #endif  /* HAVE_MORECORE */
663*2680e0c0SChristopher Ferris #if !HAVE_MORECORE
664*2680e0c0SChristopher Ferris #define MORECORE_CONTIGUOUS 0
665*2680e0c0SChristopher Ferris #else   /* !HAVE_MORECORE */
666*2680e0c0SChristopher Ferris #define MORECORE_DEFAULT sbrk
667*2680e0c0SChristopher Ferris #ifndef MORECORE_CONTIGUOUS
668*2680e0c0SChristopher Ferris #define MORECORE_CONTIGUOUS 1
669*2680e0c0SChristopher Ferris #endif  /* MORECORE_CONTIGUOUS */
670*2680e0c0SChristopher Ferris #endif  /* HAVE_MORECORE */
671*2680e0c0SChristopher Ferris #ifndef DEFAULT_GRANULARITY
672*2680e0c0SChristopher Ferris #if (MORECORE_CONTIGUOUS || defined(WIN32))
673*2680e0c0SChristopher Ferris #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
674*2680e0c0SChristopher Ferris #else   /* MORECORE_CONTIGUOUS */
675*2680e0c0SChristopher Ferris #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
676*2680e0c0SChristopher Ferris #endif  /* MORECORE_CONTIGUOUS */
677*2680e0c0SChristopher Ferris #endif  /* DEFAULT_GRANULARITY */
678*2680e0c0SChristopher Ferris #ifndef DEFAULT_TRIM_THRESHOLD
679*2680e0c0SChristopher Ferris #ifndef MORECORE_CANNOT_TRIM
680*2680e0c0SChristopher Ferris #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
681*2680e0c0SChristopher Ferris #else   /* MORECORE_CANNOT_TRIM */
682*2680e0c0SChristopher Ferris #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
683*2680e0c0SChristopher Ferris #endif  /* MORECORE_CANNOT_TRIM */
684*2680e0c0SChristopher Ferris #endif  /* DEFAULT_TRIM_THRESHOLD */
685*2680e0c0SChristopher Ferris #ifndef DEFAULT_MMAP_THRESHOLD
686*2680e0c0SChristopher Ferris #if HAVE_MMAP
687*2680e0c0SChristopher Ferris #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
688*2680e0c0SChristopher Ferris #else   /* HAVE_MMAP */
689*2680e0c0SChristopher Ferris #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
690*2680e0c0SChristopher Ferris #endif  /* HAVE_MMAP */
691*2680e0c0SChristopher Ferris #endif  /* DEFAULT_MMAP_THRESHOLD */
692*2680e0c0SChristopher Ferris #ifndef MAX_RELEASE_CHECK_RATE
693*2680e0c0SChristopher Ferris #if HAVE_MMAP
694*2680e0c0SChristopher Ferris #define MAX_RELEASE_CHECK_RATE 4095
695*2680e0c0SChristopher Ferris #else
696*2680e0c0SChristopher Ferris #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
697*2680e0c0SChristopher Ferris #endif /* HAVE_MMAP */
698*2680e0c0SChristopher Ferris #endif /* MAX_RELEASE_CHECK_RATE */
699*2680e0c0SChristopher Ferris #ifndef USE_BUILTIN_FFS
700*2680e0c0SChristopher Ferris #define USE_BUILTIN_FFS 0
701*2680e0c0SChristopher Ferris #endif  /* USE_BUILTIN_FFS */
702*2680e0c0SChristopher Ferris #ifndef USE_DEV_RANDOM
703*2680e0c0SChristopher Ferris #define USE_DEV_RANDOM 0
704*2680e0c0SChristopher Ferris #endif  /* USE_DEV_RANDOM */
705*2680e0c0SChristopher Ferris #ifndef NO_MALLINFO
706*2680e0c0SChristopher Ferris #define NO_MALLINFO 0
707*2680e0c0SChristopher Ferris #endif  /* NO_MALLINFO */
708*2680e0c0SChristopher Ferris #ifndef MALLINFO_FIELD_TYPE
709*2680e0c0SChristopher Ferris #define MALLINFO_FIELD_TYPE size_t
710*2680e0c0SChristopher Ferris #endif  /* MALLINFO_FIELD_TYPE */
711*2680e0c0SChristopher Ferris #ifndef NO_MALLOC_STATS
712*2680e0c0SChristopher Ferris #define NO_MALLOC_STATS 0
713*2680e0c0SChristopher Ferris #endif  /* NO_MALLOC_STATS */
714*2680e0c0SChristopher Ferris #ifndef NO_SEGMENT_TRAVERSAL
715*2680e0c0SChristopher Ferris #define NO_SEGMENT_TRAVERSAL 0
716*2680e0c0SChristopher Ferris #endif /* NO_SEGMENT_TRAVERSAL */
717*2680e0c0SChristopher Ferris 
718*2680e0c0SChristopher Ferris /*
719*2680e0c0SChristopher Ferris   mallopt tuning options.  SVID/XPG defines four standard parameter
720*2680e0c0SChristopher Ferris   numbers for mallopt, normally defined in malloc.h.  None of these
721*2680e0c0SChristopher Ferris   are used in this malloc, so setting them has no effect. But this
722*2680e0c0SChristopher Ferris   malloc does support the following options.
723*2680e0c0SChristopher Ferris */
724*2680e0c0SChristopher Ferris 
725*2680e0c0SChristopher Ferris #define M_TRIM_THRESHOLD     (-1)
726*2680e0c0SChristopher Ferris #define M_GRANULARITY        (-2)
727*2680e0c0SChristopher Ferris #define M_MMAP_THRESHOLD     (-3)
728*2680e0c0SChristopher Ferris 
729*2680e0c0SChristopher Ferris /* ------------------------ Mallinfo declarations ------------------------ */
730*2680e0c0SChristopher Ferris 
731*2680e0c0SChristopher Ferris #if !NO_MALLINFO
732*2680e0c0SChristopher Ferris /*
733*2680e0c0SChristopher Ferris   This version of malloc supports the standard SVID/XPG mallinfo
734*2680e0c0SChristopher Ferris   routine that returns a struct containing usage properties and
735*2680e0c0SChristopher Ferris   statistics. It should work on any system that has a
736*2680e0c0SChristopher Ferris   /usr/include/malloc.h defining struct mallinfo.  The main
737*2680e0c0SChristopher Ferris   declaration needed is the mallinfo struct that is returned (by-copy)
738*2680e0c0SChristopher Ferris   by mallinfo().  The malloinfo struct contains a bunch of fields that
739*2680e0c0SChristopher Ferris   are not even meaningful in this version of malloc.  These fields are
740*2680e0c0SChristopher Ferris   are instead filled by mallinfo() with other numbers that might be of
741*2680e0c0SChristopher Ferris   interest.
742*2680e0c0SChristopher Ferris 
743*2680e0c0SChristopher Ferris   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
744*2680e0c0SChristopher Ferris   /usr/include/malloc.h file that includes a declaration of struct
745*2680e0c0SChristopher Ferris   mallinfo.  If so, it is included; else a compliant version is
746*2680e0c0SChristopher Ferris   declared below.  These must be precisely the same for mallinfo() to
747*2680e0c0SChristopher Ferris   work.  The original SVID version of this struct, defined on most
748*2680e0c0SChristopher Ferris   systems with mallinfo, declares all fields as ints. But some others
749*2680e0c0SChristopher Ferris   define as unsigned long. If your system defines the fields using a
750*2680e0c0SChristopher Ferris   type of different width than listed here, you MUST #include your
751*2680e0c0SChristopher Ferris   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
752*2680e0c0SChristopher Ferris */
753*2680e0c0SChristopher Ferris 
754*2680e0c0SChristopher Ferris /* #define HAVE_USR_INCLUDE_MALLOC_H */
755*2680e0c0SChristopher Ferris 
756*2680e0c0SChristopher Ferris #ifdef HAVE_USR_INCLUDE_MALLOC_H
757*2680e0c0SChristopher Ferris #include "/usr/include/malloc.h"
758*2680e0c0SChristopher Ferris #else /* HAVE_USR_INCLUDE_MALLOC_H */
759*2680e0c0SChristopher Ferris #ifndef STRUCT_MALLINFO_DECLARED
760*2680e0c0SChristopher Ferris /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
761*2680e0c0SChristopher Ferris #define _STRUCT_MALLINFO
762*2680e0c0SChristopher Ferris #define STRUCT_MALLINFO_DECLARED 1
763*2680e0c0SChristopher Ferris struct mallinfo {
764*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
765*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
766*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
767*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
768*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
769*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
770*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
771*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
772*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE fordblks; /* total free space */
773*2680e0c0SChristopher Ferris   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
774*2680e0c0SChristopher Ferris };
775*2680e0c0SChristopher Ferris #endif /* STRUCT_MALLINFO_DECLARED */
776*2680e0c0SChristopher Ferris #endif /* HAVE_USR_INCLUDE_MALLOC_H */
777*2680e0c0SChristopher Ferris #endif /* NO_MALLINFO */
778*2680e0c0SChristopher Ferris 
779*2680e0c0SChristopher Ferris /*
780*2680e0c0SChristopher Ferris   Try to persuade compilers to inline. The most critical functions for
781*2680e0c0SChristopher Ferris   inlining are defined as macros, so these aren't used for them.
782*2680e0c0SChristopher Ferris */
783*2680e0c0SChristopher Ferris 
784*2680e0c0SChristopher Ferris #ifndef FORCEINLINE
785*2680e0c0SChristopher Ferris   #if defined(__GNUC__)
786*2680e0c0SChristopher Ferris #define FORCEINLINE __inline __attribute__ ((always_inline))
787*2680e0c0SChristopher Ferris   #elif defined(_MSC_VER)
788*2680e0c0SChristopher Ferris     #define FORCEINLINE __forceinline
789*2680e0c0SChristopher Ferris   #endif
790*2680e0c0SChristopher Ferris #endif
791*2680e0c0SChristopher Ferris #ifndef NOINLINE
792*2680e0c0SChristopher Ferris   #if defined(__GNUC__)
793*2680e0c0SChristopher Ferris     #define NOINLINE __attribute__ ((noinline))
794*2680e0c0SChristopher Ferris   #elif defined(_MSC_VER)
795*2680e0c0SChristopher Ferris     #define NOINLINE __declspec(noinline)
796*2680e0c0SChristopher Ferris   #else
797*2680e0c0SChristopher Ferris     #define NOINLINE
798*2680e0c0SChristopher Ferris   #endif
799*2680e0c0SChristopher Ferris #endif
800*2680e0c0SChristopher Ferris 
801*2680e0c0SChristopher Ferris #ifdef __cplusplus
802*2680e0c0SChristopher Ferris extern "C" {
803*2680e0c0SChristopher Ferris #ifndef FORCEINLINE
804*2680e0c0SChristopher Ferris  #define FORCEINLINE inline
805*2680e0c0SChristopher Ferris #endif
806*2680e0c0SChristopher Ferris #endif /* __cplusplus */
807*2680e0c0SChristopher Ferris #ifndef FORCEINLINE
808*2680e0c0SChristopher Ferris  #define FORCEINLINE
809*2680e0c0SChristopher Ferris #endif
810*2680e0c0SChristopher Ferris 
811*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
812*2680e0c0SChristopher Ferris 
813*2680e0c0SChristopher Ferris /* ------------------- Declarations of public routines ------------------- */
814*2680e0c0SChristopher Ferris 
815*2680e0c0SChristopher Ferris #ifndef USE_DL_PREFIX
816*2680e0c0SChristopher Ferris #define dlcalloc               calloc
817*2680e0c0SChristopher Ferris #define dlfree                 free
818*2680e0c0SChristopher Ferris #define dlmalloc               malloc
819*2680e0c0SChristopher Ferris #define dlmemalign             memalign
820*2680e0c0SChristopher Ferris #define dlposix_memalign       posix_memalign
821*2680e0c0SChristopher Ferris #define dlrealloc              realloc
822*2680e0c0SChristopher Ferris #define dlrealloc_in_place     realloc_in_place
823*2680e0c0SChristopher Ferris #define dlvalloc               valloc
824*2680e0c0SChristopher Ferris #define dlpvalloc              pvalloc
825*2680e0c0SChristopher Ferris #define dlmallinfo             mallinfo
826*2680e0c0SChristopher Ferris #define dlmallopt              mallopt
827*2680e0c0SChristopher Ferris #define dlmalloc_trim          malloc_trim
828*2680e0c0SChristopher Ferris #define dlmalloc_stats         malloc_stats
829*2680e0c0SChristopher Ferris #define dlmalloc_usable_size   malloc_usable_size
830*2680e0c0SChristopher Ferris #define dlmalloc_footprint     malloc_footprint
831*2680e0c0SChristopher Ferris #define dlmalloc_max_footprint malloc_max_footprint
832*2680e0c0SChristopher Ferris #define dlmalloc_footprint_limit malloc_footprint_limit
833*2680e0c0SChristopher Ferris #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
834*2680e0c0SChristopher Ferris #define dlmalloc_inspect_all   malloc_inspect_all
835*2680e0c0SChristopher Ferris #define dlindependent_calloc   independent_calloc
836*2680e0c0SChristopher Ferris #define dlindependent_comalloc independent_comalloc
837*2680e0c0SChristopher Ferris #define dlbulk_free            bulk_free
838*2680e0c0SChristopher Ferris #endif /* USE_DL_PREFIX */
839*2680e0c0SChristopher Ferris 
840*2680e0c0SChristopher Ferris /*
841*2680e0c0SChristopher Ferris   malloc(size_t n)
842*2680e0c0SChristopher Ferris   Returns a pointer to a newly allocated chunk of at least n bytes, or
843*2680e0c0SChristopher Ferris   null if no space is available, in which case errno is set to ENOMEM
844*2680e0c0SChristopher Ferris   on ANSI C systems.
845*2680e0c0SChristopher Ferris 
846*2680e0c0SChristopher Ferris   If n is zero, malloc returns a minimum-sized chunk. (The minimum
847*2680e0c0SChristopher Ferris   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
848*2680e0c0SChristopher Ferris   systems.)  Note that size_t is an unsigned type, so calls with
849*2680e0c0SChristopher Ferris   arguments that would be negative if signed are interpreted as
850*2680e0c0SChristopher Ferris   requests for huge amounts of space, which will often fail. The
851*2680e0c0SChristopher Ferris   maximum supported value of n differs across systems, but is in all
852*2680e0c0SChristopher Ferris   cases less than the maximum representable value of a size_t.
853*2680e0c0SChristopher Ferris */
854*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlmalloc(size_t);
855*2680e0c0SChristopher Ferris 
856*2680e0c0SChristopher Ferris /*
857*2680e0c0SChristopher Ferris   free(void* p)
858*2680e0c0SChristopher Ferris   Releases the chunk of memory pointed to by p, that had been previously
859*2680e0c0SChristopher Ferris   allocated using malloc or a related routine such as realloc.
860*2680e0c0SChristopher Ferris   It has no effect if p is null. If p was not malloced or already
861*2680e0c0SChristopher Ferris   freed, free(p) will by default cause the current program to abort.
862*2680e0c0SChristopher Ferris */
863*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void  dlfree(void*);
864*2680e0c0SChristopher Ferris 
865*2680e0c0SChristopher Ferris /*
866*2680e0c0SChristopher Ferris   calloc(size_t n_elements, size_t element_size);
867*2680e0c0SChristopher Ferris   Returns a pointer to n_elements * element_size bytes, with all locations
868*2680e0c0SChristopher Ferris   set to zero.
869*2680e0c0SChristopher Ferris */
870*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
871*2680e0c0SChristopher Ferris 
872*2680e0c0SChristopher Ferris /*
873*2680e0c0SChristopher Ferris   realloc(void* p, size_t n)
874*2680e0c0SChristopher Ferris   Returns a pointer to a chunk of size n that contains the same data
875*2680e0c0SChristopher Ferris   as does chunk p up to the minimum of (n, p's size) bytes, or null
876*2680e0c0SChristopher Ferris   if no space is available.
877*2680e0c0SChristopher Ferris 
878*2680e0c0SChristopher Ferris   The returned pointer may or may not be the same as p. The algorithm
879*2680e0c0SChristopher Ferris   prefers extending p in most cases when possible, otherwise it
880*2680e0c0SChristopher Ferris   employs the equivalent of a malloc-copy-free sequence.
881*2680e0c0SChristopher Ferris 
882*2680e0c0SChristopher Ferris   If p is null, realloc is equivalent to malloc.
883*2680e0c0SChristopher Ferris 
884*2680e0c0SChristopher Ferris   If space is not available, realloc returns null, errno is set (if on
885*2680e0c0SChristopher Ferris   ANSI) and p is NOT freed.
886*2680e0c0SChristopher Ferris 
887*2680e0c0SChristopher Ferris   if n is for fewer bytes than already held by p, the newly unused
888*2680e0c0SChristopher Ferris   space is lopped off and freed if possible.  realloc with a size
889*2680e0c0SChristopher Ferris   argument of zero (re)allocates a minimum-sized chunk.
890*2680e0c0SChristopher Ferris 
891*2680e0c0SChristopher Ferris   The old unix realloc convention of allowing the last-free'd chunk
892*2680e0c0SChristopher Ferris   to be used as an argument to realloc is not supported.
893*2680e0c0SChristopher Ferris */
894*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
895*2680e0c0SChristopher Ferris 
896*2680e0c0SChristopher Ferris /*
897*2680e0c0SChristopher Ferris   realloc_in_place(void* p, size_t n)
898*2680e0c0SChristopher Ferris   Resizes the space allocated for p to size n, only if this can be
899*2680e0c0SChristopher Ferris   done without moving p (i.e., only if there is adjacent space
900*2680e0c0SChristopher Ferris   available if n is greater than p's current allocated size, or n is
901*2680e0c0SChristopher Ferris   less than or equal to p's size). This may be used instead of plain
902*2680e0c0SChristopher Ferris   realloc if an alternative allocation strategy is needed upon failure
903*2680e0c0SChristopher Ferris   to expand space; for example, reallocation of a buffer that must be
904*2680e0c0SChristopher Ferris   memory-aligned or cleared. You can use realloc_in_place to trigger
905*2680e0c0SChristopher Ferris   these alternatives only when needed.
906*2680e0c0SChristopher Ferris 
907*2680e0c0SChristopher Ferris   Returns p if successful; otherwise null.
908*2680e0c0SChristopher Ferris */
909*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
910*2680e0c0SChristopher Ferris 
911*2680e0c0SChristopher Ferris /*
912*2680e0c0SChristopher Ferris   memalign(size_t alignment, size_t n);
913*2680e0c0SChristopher Ferris   Returns a pointer to a newly allocated chunk of n bytes, aligned
914*2680e0c0SChristopher Ferris   in accord with the alignment argument.
915*2680e0c0SChristopher Ferris 
916*2680e0c0SChristopher Ferris   The alignment argument should be a power of two. If the argument is
917*2680e0c0SChristopher Ferris   not a power of two, the nearest greater power is used.
918*2680e0c0SChristopher Ferris   8-byte alignment is guaranteed by normal malloc calls, so don't
919*2680e0c0SChristopher Ferris   bother calling memalign with an argument of 8 or less.
920*2680e0c0SChristopher Ferris 
921*2680e0c0SChristopher Ferris   Overreliance on memalign is a sure way to fragment space.
922*2680e0c0SChristopher Ferris */
923*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
924*2680e0c0SChristopher Ferris 
925*2680e0c0SChristopher Ferris /*
926*2680e0c0SChristopher Ferris   int posix_memalign(void** pp, size_t alignment, size_t n);
927*2680e0c0SChristopher Ferris   Allocates a chunk of n bytes, aligned in accord with the alignment
928*2680e0c0SChristopher Ferris   argument. Differs from memalign only in that it (1) assigns the
929*2680e0c0SChristopher Ferris   allocated memory to *pp rather than returning it, (2) fails and
930*2680e0c0SChristopher Ferris   returns EINVAL if the alignment is not a power of two (3) fails and
931*2680e0c0SChristopher Ferris   returns ENOMEM if memory cannot be allocated.
932*2680e0c0SChristopher Ferris */
933*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
934*2680e0c0SChristopher Ferris 
935*2680e0c0SChristopher Ferris /*
936*2680e0c0SChristopher Ferris   valloc(size_t n);
937*2680e0c0SChristopher Ferris   Equivalent to memalign(pagesize, n), where pagesize is the page
938*2680e0c0SChristopher Ferris   size of the system. If the pagesize is unknown, 4096 is used.
939*2680e0c0SChristopher Ferris */
940*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* dlvalloc(size_t);
941*2680e0c0SChristopher Ferris 
942*2680e0c0SChristopher Ferris /*
943*2680e0c0SChristopher Ferris   mallopt(int parameter_number, int parameter_value)
944*2680e0c0SChristopher Ferris   Sets tunable parameters The format is to provide a
945*2680e0c0SChristopher Ferris   (parameter-number, parameter-value) pair.  mallopt then sets the
946*2680e0c0SChristopher Ferris   corresponding parameter to the argument value if it can (i.e., so
947*2680e0c0SChristopher Ferris   long as the value is meaningful), and returns 1 if successful else
948*2680e0c0SChristopher Ferris   0.  To workaround the fact that mallopt is specified to use int,
949*2680e0c0SChristopher Ferris   not size_t parameters, the value -1 is specially treated as the
950*2680e0c0SChristopher Ferris   maximum unsigned size_t value.
951*2680e0c0SChristopher Ferris 
952*2680e0c0SChristopher Ferris   SVID/XPG/ANSI defines four standard param numbers for mallopt,
953*2680e0c0SChristopher Ferris   normally defined in malloc.h.  None of these are use in this malloc,
954*2680e0c0SChristopher Ferris   so setting them has no effect. But this malloc also supports other
955*2680e0c0SChristopher Ferris   options in mallopt. See below for details.  Briefly, supported
956*2680e0c0SChristopher Ferris   parameters are as follows (listed defaults are for "typical"
957*2680e0c0SChristopher Ferris   configurations).
958*2680e0c0SChristopher Ferris 
959*2680e0c0SChristopher Ferris   Symbol            param #  default    allowed param values
960*2680e0c0SChristopher Ferris   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
961*2680e0c0SChristopher Ferris   M_GRANULARITY        -2     page size   any power of 2 >= page size
962*2680e0c0SChristopher Ferris   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
963*2680e0c0SChristopher Ferris */
964*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int dlmallopt(int, int);
965*2680e0c0SChristopher Ferris 
966*2680e0c0SChristopher Ferris /*
967*2680e0c0SChristopher Ferris   malloc_footprint();
968*2680e0c0SChristopher Ferris   Returns the number of bytes obtained from the system.  The total
969*2680e0c0SChristopher Ferris   number of bytes allocated by malloc, realloc etc., is less than this
970*2680e0c0SChristopher Ferris   value. Unlike mallinfo, this function returns only a precomputed
971*2680e0c0SChristopher Ferris   result, so can be called frequently to monitor memory consumption.
972*2680e0c0SChristopher Ferris   Even if locks are otherwise defined, this function does not use them,
973*2680e0c0SChristopher Ferris   so results might not be up to date.
974*2680e0c0SChristopher Ferris */
975*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
976*2680e0c0SChristopher Ferris 
977*2680e0c0SChristopher Ferris /*
978*2680e0c0SChristopher Ferris   malloc_max_footprint();
979*2680e0c0SChristopher Ferris   Returns the maximum number of bytes obtained from the system. This
980*2680e0c0SChristopher Ferris   value will be greater than current footprint if deallocated space
981*2680e0c0SChristopher Ferris   has been reclaimed by the system. The peak number of bytes allocated
982*2680e0c0SChristopher Ferris   by malloc, realloc etc., is less than this value. Unlike mallinfo,
983*2680e0c0SChristopher Ferris   this function returns only a precomputed result, so can be called
984*2680e0c0SChristopher Ferris   frequently to monitor memory consumption.  Even if locks are
985*2680e0c0SChristopher Ferris   otherwise defined, this function does not use them, so results might
986*2680e0c0SChristopher Ferris   not be up to date.
987*2680e0c0SChristopher Ferris */
988*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
989*2680e0c0SChristopher Ferris 
990*2680e0c0SChristopher Ferris /*
991*2680e0c0SChristopher Ferris   malloc_footprint_limit();
992*2680e0c0SChristopher Ferris   Returns the number of bytes that the heap is allowed to obtain from
993*2680e0c0SChristopher Ferris   the system, returning the last value returned by
994*2680e0c0SChristopher Ferris   malloc_set_footprint_limit, or the maximum size_t value if
995*2680e0c0SChristopher Ferris   never set. The returned value reflects a permission. There is no
996*2680e0c0SChristopher Ferris   guarantee that this number of bytes can actually be obtained from
997*2680e0c0SChristopher Ferris   the system.
998*2680e0c0SChristopher Ferris */
999*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
1000*2680e0c0SChristopher Ferris 
1001*2680e0c0SChristopher Ferris /*
1002*2680e0c0SChristopher Ferris   malloc_set_footprint_limit();
1003*2680e0c0SChristopher Ferris   Sets the maximum number of bytes to obtain from the system, causing
1004*2680e0c0SChristopher Ferris   failure returns from malloc and related functions upon attempts to
1005*2680e0c0SChristopher Ferris   exceed this value. The argument value may be subject to page
1006*2680e0c0SChristopher Ferris   rounding to an enforceable limit; this actual value is returned.
1007*2680e0c0SChristopher Ferris   Using an argument of the maximum possible size_t effectively
1008*2680e0c0SChristopher Ferris   disables checks. If the argument is less than or equal to the
1009*2680e0c0SChristopher Ferris   current malloc_footprint, then all future allocations that require
1010*2680e0c0SChristopher Ferris   additional system memory will fail. However, invocation cannot
1011*2680e0c0SChristopher Ferris   retroactively deallocate existing used memory.
1012*2680e0c0SChristopher Ferris */
1013*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
1014*2680e0c0SChristopher Ferris 
1015*2680e0c0SChristopher Ferris #if MALLOC_INSPECT_ALL
1016*2680e0c0SChristopher Ferris /*
1017*2680e0c0SChristopher Ferris   malloc_inspect_all(void(*handler)(void *start,
1018*2680e0c0SChristopher Ferris                                     void *end,
1019*2680e0c0SChristopher Ferris                                     size_t used_bytes,
1020*2680e0c0SChristopher Ferris                                     void* callback_arg),
1021*2680e0c0SChristopher Ferris                       void* arg);
1022*2680e0c0SChristopher Ferris   Traverses the heap and calls the given handler for each managed
1023*2680e0c0SChristopher Ferris   region, skipping all bytes that are (or may be) used for bookkeeping
1024*2680e0c0SChristopher Ferris   purposes.  Traversal does not include include chunks that have been
1025*2680e0c0SChristopher Ferris   directly memory mapped. Each reported region begins at the start
1026*2680e0c0SChristopher Ferris   address, and continues up to but not including the end address.  The
1027*2680e0c0SChristopher Ferris   first used_bytes of the region contain allocated data. If
1028*2680e0c0SChristopher Ferris   used_bytes is zero, the region is unallocated. The handler is
1029*2680e0c0SChristopher Ferris   invoked with the given callback argument. If locks are defined, they
1030*2680e0c0SChristopher Ferris   are held during the entire traversal. It is a bad idea to invoke
1031*2680e0c0SChristopher Ferris   other malloc functions from within the handler.
1032*2680e0c0SChristopher Ferris 
1033*2680e0c0SChristopher Ferris   For example, to count the number of in-use chunks with size greater
1034*2680e0c0SChristopher Ferris   than 1000, you could write:
1035*2680e0c0SChristopher Ferris   static int count = 0;
1036*2680e0c0SChristopher Ferris   void count_chunks(void* start, void* end, size_t used, void* arg) {
1037*2680e0c0SChristopher Ferris     if (used >= 1000) ++count;
1038*2680e0c0SChristopher Ferris   }
1039*2680e0c0SChristopher Ferris   then:
1040*2680e0c0SChristopher Ferris     malloc_inspect_all(count_chunks, NULL);
1041*2680e0c0SChristopher Ferris 
1042*2680e0c0SChristopher Ferris   malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
1043*2680e0c0SChristopher Ferris */
1044*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
1045*2680e0c0SChristopher Ferris                            void* arg);
1046*2680e0c0SChristopher Ferris 
1047*2680e0c0SChristopher Ferris #endif /* MALLOC_INSPECT_ALL */
1048*2680e0c0SChristopher Ferris 
1049*2680e0c0SChristopher Ferris #if !NO_MALLINFO
1050*2680e0c0SChristopher Ferris /*
1051*2680e0c0SChristopher Ferris   mallinfo()
1052*2680e0c0SChristopher Ferris   Returns (by copy) a struct containing various summary statistics:
1053*2680e0c0SChristopher Ferris 
1054*2680e0c0SChristopher Ferris   arena:     current total non-mmapped bytes allocated from system
1055*2680e0c0SChristopher Ferris   ordblks:   the number of free chunks
1056*2680e0c0SChristopher Ferris   smblks:    always zero.
1057*2680e0c0SChristopher Ferris   hblks:     current number of mmapped regions
1058*2680e0c0SChristopher Ferris   hblkhd:    total bytes held in mmapped regions
1059*2680e0c0SChristopher Ferris   usmblks:   the maximum total allocated space. This will be greater
1060*2680e0c0SChristopher Ferris                 than current total if trimming has occurred.
1061*2680e0c0SChristopher Ferris   fsmblks:   always zero
1062*2680e0c0SChristopher Ferris   uordblks:  current total allocated space (normal or mmapped)
1063*2680e0c0SChristopher Ferris   fordblks:  total free space
1064*2680e0c0SChristopher Ferris   keepcost:  the maximum number of bytes that could ideally be released
1065*2680e0c0SChristopher Ferris                back to system via malloc_trim. ("ideally" means that
1066*2680e0c0SChristopher Ferris                it ignores page restrictions etc.)
1067*2680e0c0SChristopher Ferris 
1068*2680e0c0SChristopher Ferris   Because these fields are ints, but internal bookkeeping may
1069*2680e0c0SChristopher Ferris   be kept as longs, the reported values may wrap around zero and
1070*2680e0c0SChristopher Ferris   thus be inaccurate.
1071*2680e0c0SChristopher Ferris */
1072*2680e0c0SChristopher Ferris DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
1073*2680e0c0SChristopher Ferris #endif /* NO_MALLINFO */
1074*2680e0c0SChristopher Ferris 
1075*2680e0c0SChristopher Ferris /*
1076*2680e0c0SChristopher Ferris   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
1077*2680e0c0SChristopher Ferris 
1078*2680e0c0SChristopher Ferris   independent_calloc is similar to calloc, but instead of returning a
1079*2680e0c0SChristopher Ferris   single cleared space, it returns an array of pointers to n_elements
1080*2680e0c0SChristopher Ferris   independent elements that can hold contents of size elem_size, each
1081*2680e0c0SChristopher Ferris   of which starts out cleared, and can be independently freed,
1082*2680e0c0SChristopher Ferris   realloc'ed etc. The elements are guaranteed to be adjacently
1083*2680e0c0SChristopher Ferris   allocated (this is not guaranteed to occur with multiple callocs or
1084*2680e0c0SChristopher Ferris   mallocs), which may also improve cache locality in some
1085*2680e0c0SChristopher Ferris   applications.
1086*2680e0c0SChristopher Ferris 
1087*2680e0c0SChristopher Ferris   The "chunks" argument is optional (i.e., may be null, which is
1088*2680e0c0SChristopher Ferris   probably the most typical usage). If it is null, the returned array
1089*2680e0c0SChristopher Ferris   is itself dynamically allocated and should also be freed when it is
1090*2680e0c0SChristopher Ferris   no longer needed. Otherwise, the chunks array must be of at least
1091*2680e0c0SChristopher Ferris   n_elements in length. It is filled in with the pointers to the
1092*2680e0c0SChristopher Ferris   chunks.
1093*2680e0c0SChristopher Ferris 
1094*2680e0c0SChristopher Ferris   In either case, independent_calloc returns this pointer array, or
1095*2680e0c0SChristopher Ferris   null if the allocation failed.  If n_elements is zero and "chunks"
1096*2680e0c0SChristopher Ferris   is null, it returns a chunk representing an array with zero elements
1097*2680e0c0SChristopher Ferris   (which should be freed if not wanted).
1098*2680e0c0SChristopher Ferris 
1099*2680e0c0SChristopher Ferris   Each element must be freed when it is no longer needed. This can be
1100*2680e0c0SChristopher Ferris   done all at once using bulk_free.
1101*2680e0c0SChristopher Ferris 
1102*2680e0c0SChristopher Ferris   independent_calloc simplifies and speeds up implementations of many
1103*2680e0c0SChristopher Ferris   kinds of pools.  It may also be useful when constructing large data
1104*2680e0c0SChristopher Ferris   structures that initially have a fixed number of fixed-sized nodes,
1105*2680e0c0SChristopher Ferris   but the number is not known at compile time, and some of the nodes
1106*2680e0c0SChristopher Ferris   may later need to be freed. For example:
1107*2680e0c0SChristopher Ferris 
1108*2680e0c0SChristopher Ferris   struct Node { int item; struct Node* next; };
1109*2680e0c0SChristopher Ferris 
1110*2680e0c0SChristopher Ferris   struct Node* build_list() {
1111*2680e0c0SChristopher Ferris     struct Node** pool;
1112*2680e0c0SChristopher Ferris     int n = read_number_of_nodes_needed();
1113*2680e0c0SChristopher Ferris     if (n <= 0) return 0;
1114*2680e0c0SChristopher Ferris     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0);
1115*2680e0c0SChristopher Ferris     if (pool == 0) die();
1116*2680e0c0SChristopher Ferris     // organize into a linked list...
1117*2680e0c0SChristopher Ferris     struct Node* first = pool[0];
1118*2680e0c0SChristopher Ferris     for (i = 0; i < n-1; ++i)
1119*2680e0c0SChristopher Ferris       pool[i]->next = pool[i+1];
1120*2680e0c0SChristopher Ferris     free(pool);     // Can now free the array (or not, if it is needed later)
1121*2680e0c0SChristopher Ferris     return first;
1122*2680e0c0SChristopher Ferris   }
1123*2680e0c0SChristopher Ferris */
1124*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
1125*2680e0c0SChristopher Ferris 
1126*2680e0c0SChristopher Ferris /*
1127*2680e0c0SChristopher Ferris   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
1128*2680e0c0SChristopher Ferris 
1129*2680e0c0SChristopher Ferris   independent_comalloc allocates, all at once, a set of n_elements
1130*2680e0c0SChristopher Ferris   chunks with sizes indicated in the "sizes" array.    It returns
1131*2680e0c0SChristopher Ferris   an array of pointers to these elements, each of which can be
1132*2680e0c0SChristopher Ferris   independently freed, realloc'ed etc. The elements are guaranteed to
1133*2680e0c0SChristopher Ferris   be adjacently allocated (this is not guaranteed to occur with
1134*2680e0c0SChristopher Ferris   multiple callocs or mallocs), which may also improve cache locality
1135*2680e0c0SChristopher Ferris   in some applications.
1136*2680e0c0SChristopher Ferris 
1137*2680e0c0SChristopher Ferris   The "chunks" argument is optional (i.e., may be null). If it is null
1138*2680e0c0SChristopher Ferris   the returned array is itself dynamically allocated and should also
1139*2680e0c0SChristopher Ferris   be freed when it is no longer needed. Otherwise, the chunks array
1140*2680e0c0SChristopher Ferris   must be of at least n_elements in length. It is filled in with the
1141*2680e0c0SChristopher Ferris   pointers to the chunks.
1142*2680e0c0SChristopher Ferris 
1143*2680e0c0SChristopher Ferris   In either case, independent_comalloc returns this pointer array, or
1144*2680e0c0SChristopher Ferris   null if the allocation failed.  If n_elements is zero and chunks is
1145*2680e0c0SChristopher Ferris   null, it returns a chunk representing an array with zero elements
1146*2680e0c0SChristopher Ferris   (which should be freed if not wanted).
1147*2680e0c0SChristopher Ferris 
1148*2680e0c0SChristopher Ferris   Each element must be freed when it is no longer needed. This can be
1149*2680e0c0SChristopher Ferris   done all at once using bulk_free.
1150*2680e0c0SChristopher Ferris 
1151*2680e0c0SChristopher Ferris   independent_comallac differs from independent_calloc in that each
1152*2680e0c0SChristopher Ferris   element may have a different size, and also that it does not
1153*2680e0c0SChristopher Ferris   automatically clear elements.
1154*2680e0c0SChristopher Ferris 
1155*2680e0c0SChristopher Ferris   independent_comalloc can be used to speed up allocation in cases
1156*2680e0c0SChristopher Ferris   where several structs or objects must always be allocated at the
1157*2680e0c0SChristopher Ferris   same time.  For example:
1158*2680e0c0SChristopher Ferris 
1159*2680e0c0SChristopher Ferris   struct Head { ... }
1160*2680e0c0SChristopher Ferris   struct Foot { ... }
1161*2680e0c0SChristopher Ferris 
1162*2680e0c0SChristopher Ferris   void send_message(char* msg) {
1163*2680e0c0SChristopher Ferris     int msglen = strlen(msg);
1164*2680e0c0SChristopher Ferris     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
1165*2680e0c0SChristopher Ferris     void* chunks[3];
1166*2680e0c0SChristopher Ferris     if (independent_comalloc(3, sizes, chunks) == 0)
1167*2680e0c0SChristopher Ferris       die();
1168*2680e0c0SChristopher Ferris     struct Head* head = (struct Head*)(chunks[0]);
1169*2680e0c0SChristopher Ferris     char*        body = (char*)(chunks[1]);
1170*2680e0c0SChristopher Ferris     struct Foot* foot = (struct Foot*)(chunks[2]);
1171*2680e0c0SChristopher Ferris     // ...
1172*2680e0c0SChristopher Ferris   }
1173*2680e0c0SChristopher Ferris 
1174*2680e0c0SChristopher Ferris   In general though, independent_comalloc is worth using only for
1175*2680e0c0SChristopher Ferris   larger values of n_elements. For small values, you probably won't
1176*2680e0c0SChristopher Ferris   detect enough difference from series of malloc calls to bother.
1177*2680e0c0SChristopher Ferris 
1178*2680e0c0SChristopher Ferris   Overuse of independent_comalloc can increase overall memory usage,
1179*2680e0c0SChristopher Ferris   since it cannot reuse existing noncontiguous small chunks that
1180*2680e0c0SChristopher Ferris   might be available for some of the elements.
1181*2680e0c0SChristopher Ferris */
1182*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
1183*2680e0c0SChristopher Ferris 
1184*2680e0c0SChristopher Ferris /*
1185*2680e0c0SChristopher Ferris   bulk_free(void* array[], size_t n_elements)
1186*2680e0c0SChristopher Ferris   Frees and clears (sets to null) each non-null pointer in the given
1187*2680e0c0SChristopher Ferris   array.  This is likely to be faster than freeing them one-by-one.
1188*2680e0c0SChristopher Ferris   If footers are used, pointers that have been allocated in different
1189*2680e0c0SChristopher Ferris   mspaces are not freed or cleared, and the count of all such pointers
1190*2680e0c0SChristopher Ferris   is returned.  For large arrays of pointers with poor locality, it
1191*2680e0c0SChristopher Ferris   may be worthwhile to sort this array before calling bulk_free.
1192*2680e0c0SChristopher Ferris */
1193*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t  dlbulk_free(void**, size_t n_elements);
1194*2680e0c0SChristopher Ferris 
1195*2680e0c0SChristopher Ferris /*
1196*2680e0c0SChristopher Ferris   pvalloc(size_t n);
1197*2680e0c0SChristopher Ferris   Equivalent to valloc(minimum-page-that-holds(n)), that is,
1198*2680e0c0SChristopher Ferris   round up n to nearest pagesize.
1199*2680e0c0SChristopher Ferris  */
1200*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void*  dlpvalloc(size_t);
1201*2680e0c0SChristopher Ferris 
1202*2680e0c0SChristopher Ferris /*
1203*2680e0c0SChristopher Ferris   malloc_trim(size_t pad);
1204*2680e0c0SChristopher Ferris 
1205*2680e0c0SChristopher Ferris   If possible, gives memory back to the system (via negative arguments
1206*2680e0c0SChristopher Ferris   to sbrk) if there is unused memory at the `high' end of the malloc
1207*2680e0c0SChristopher Ferris   pool or in unused MMAP segments. You can call this after freeing
1208*2680e0c0SChristopher Ferris   large blocks of memory to potentially reduce the system-level memory
1209*2680e0c0SChristopher Ferris   requirements of a program. However, it cannot guarantee to reduce
1210*2680e0c0SChristopher Ferris   memory. Under some allocation patterns, some large free blocks of
1211*2680e0c0SChristopher Ferris   memory will be locked between two used chunks, so they cannot be
1212*2680e0c0SChristopher Ferris   given back to the system.
1213*2680e0c0SChristopher Ferris 
1214*2680e0c0SChristopher Ferris   The `pad' argument to malloc_trim represents the amount of free
1215*2680e0c0SChristopher Ferris   trailing space to leave untrimmed. If this argument is zero, only
1216*2680e0c0SChristopher Ferris   the minimum amount of memory to maintain internal data structures
1217*2680e0c0SChristopher Ferris   will be left. Non-zero arguments can be supplied to maintain enough
1218*2680e0c0SChristopher Ferris   trailing space to service future expected allocations without having
1219*2680e0c0SChristopher Ferris   to re-obtain memory from the system.
1220*2680e0c0SChristopher Ferris 
1221*2680e0c0SChristopher Ferris   Malloc_trim returns 1 if it actually released any memory, else 0.
1222*2680e0c0SChristopher Ferris */
1223*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int  dlmalloc_trim(size_t);
1224*2680e0c0SChristopher Ferris 
1225*2680e0c0SChristopher Ferris /*
1226*2680e0c0SChristopher Ferris   malloc_stats();
1227*2680e0c0SChristopher Ferris   Prints on stderr the amount of space obtained from the system (both
1228*2680e0c0SChristopher Ferris   via sbrk and mmap), the maximum amount (which may be more than
1229*2680e0c0SChristopher Ferris   current if malloc_trim and/or munmap got called), and the current
1230*2680e0c0SChristopher Ferris   number of bytes allocated via malloc (or realloc, etc) but not yet
1231*2680e0c0SChristopher Ferris   freed. Note that this is the number of bytes allocated, not the
1232*2680e0c0SChristopher Ferris   number requested. It will be larger than the number requested
1233*2680e0c0SChristopher Ferris   because of alignment and bookkeeping overhead. Because it includes
1234*2680e0c0SChristopher Ferris   alignment wastage as being in use, this figure may be greater than
1235*2680e0c0SChristopher Ferris   zero even when no user-level chunks are allocated.
1236*2680e0c0SChristopher Ferris 
1237*2680e0c0SChristopher Ferris   The reported current and maximum system memory can be inaccurate if
1238*2680e0c0SChristopher Ferris   a program makes other calls to system memory allocation functions
1239*2680e0c0SChristopher Ferris   (normally sbrk) outside of malloc.
1240*2680e0c0SChristopher Ferris 
1241*2680e0c0SChristopher Ferris   malloc_stats prints only the most commonly interesting statistics.
1242*2680e0c0SChristopher Ferris   More information can be obtained by calling mallinfo.
1243*2680e0c0SChristopher Ferris */
1244*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void  dlmalloc_stats(void);
1245*2680e0c0SChristopher Ferris 
1246*2680e0c0SChristopher Ferris /*
1247*2680e0c0SChristopher Ferris   malloc_usable_size(void* p);
1248*2680e0c0SChristopher Ferris 
1249*2680e0c0SChristopher Ferris   Returns the number of bytes you can actually use in
1250*2680e0c0SChristopher Ferris   an allocated chunk, which may be more than you requested (although
1251*2680e0c0SChristopher Ferris   often not) due to alignment and minimum size constraints.
1252*2680e0c0SChristopher Ferris   You can use this many bytes without worrying about
1253*2680e0c0SChristopher Ferris   overwriting other allocated objects. This is not a particularly great
1254*2680e0c0SChristopher Ferris   programming practice. malloc_usable_size can be more useful in
1255*2680e0c0SChristopher Ferris   debugging and assertions, for example:
1256*2680e0c0SChristopher Ferris 
1257*2680e0c0SChristopher Ferris   p = malloc(n);
1258*2680e0c0SChristopher Ferris   assert(malloc_usable_size(p) >= 256);
1259*2680e0c0SChristopher Ferris */
1260*2680e0c0SChristopher Ferris /* BEGIN android-changed: added const */
1261*2680e0c0SChristopher Ferris size_t dlmalloc_usable_size(const void*);
1262*2680e0c0SChristopher Ferris /* END android-change */
1263*2680e0c0SChristopher Ferris 
1264*2680e0c0SChristopher Ferris #endif /* ONLY_MSPACES */
1265*2680e0c0SChristopher Ferris 
1266*2680e0c0SChristopher Ferris #if MSPACES
1267*2680e0c0SChristopher Ferris 
1268*2680e0c0SChristopher Ferris /*
1269*2680e0c0SChristopher Ferris   mspace is an opaque type representing an independent
1270*2680e0c0SChristopher Ferris   region of space that supports mspace_malloc, etc.
1271*2680e0c0SChristopher Ferris */
1272*2680e0c0SChristopher Ferris typedef void* mspace;
1273*2680e0c0SChristopher Ferris 
1274*2680e0c0SChristopher Ferris /*
1275*2680e0c0SChristopher Ferris   create_mspace creates and returns a new independent space with the
1276*2680e0c0SChristopher Ferris   given initial capacity, or, if 0, the default granularity size.  It
1277*2680e0c0SChristopher Ferris   returns null if there is no system memory available to create the
1278*2680e0c0SChristopher Ferris   space.  If argument locked is non-zero, the space uses a separate
1279*2680e0c0SChristopher Ferris   lock to control access. The capacity of the space will grow
1280*2680e0c0SChristopher Ferris   dynamically as needed to service mspace_malloc requests.  You can
1281*2680e0c0SChristopher Ferris   control the sizes of incremental increases of this space by
1282*2680e0c0SChristopher Ferris   compiling with a different DEFAULT_GRANULARITY or dynamically
1283*2680e0c0SChristopher Ferris   setting with mallopt(M_GRANULARITY, value).
1284*2680e0c0SChristopher Ferris */
1285*2680e0c0SChristopher Ferris DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
1286*2680e0c0SChristopher Ferris 
1287*2680e0c0SChristopher Ferris /*
1288*2680e0c0SChristopher Ferris   destroy_mspace destroys the given space, and attempts to return all
1289*2680e0c0SChristopher Ferris   of its memory back to the system, returning the total number of
1290*2680e0c0SChristopher Ferris   bytes freed. After destruction, the results of access to all memory
1291*2680e0c0SChristopher Ferris   used by the space become undefined.
1292*2680e0c0SChristopher Ferris */
1293*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1294*2680e0c0SChristopher Ferris 
1295*2680e0c0SChristopher Ferris /*
1296*2680e0c0SChristopher Ferris   create_mspace_with_base uses the memory supplied as the initial base
1297*2680e0c0SChristopher Ferris   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1298*2680e0c0SChristopher Ferris   space is used for bookkeeping, so the capacity must be at least this
1299*2680e0c0SChristopher Ferris   large. (Otherwise 0 is returned.) When this initial space is
1300*2680e0c0SChristopher Ferris   exhausted, additional memory will be obtained from the system.
1301*2680e0c0SChristopher Ferris   Destroying this space will deallocate all additionally allocated
1302*2680e0c0SChristopher Ferris   space (if possible) but not the initial base.
1303*2680e0c0SChristopher Ferris */
1304*2680e0c0SChristopher Ferris DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1305*2680e0c0SChristopher Ferris 
1306*2680e0c0SChristopher Ferris /*
1307*2680e0c0SChristopher Ferris   mspace_track_large_chunks controls whether requests for large chunks
1308*2680e0c0SChristopher Ferris   are allocated in their own untracked mmapped regions, separate from
1309*2680e0c0SChristopher Ferris   others in this mspace. By default large chunks are not tracked,
1310*2680e0c0SChristopher Ferris   which reduces fragmentation. However, such chunks are not
1311*2680e0c0SChristopher Ferris   necessarily released to the system upon destroy_mspace.  Enabling
1312*2680e0c0SChristopher Ferris   tracking by setting to true may increase fragmentation, but avoids
1313*2680e0c0SChristopher Ferris   leakage when relying on destroy_mspace to release all memory
1314*2680e0c0SChristopher Ferris   allocated using this space.  The function returns the previous
1315*2680e0c0SChristopher Ferris   setting.
1316*2680e0c0SChristopher Ferris */
1317*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1318*2680e0c0SChristopher Ferris 
1319*2680e0c0SChristopher Ferris 
1320*2680e0c0SChristopher Ferris /*
1321*2680e0c0SChristopher Ferris   mspace_malloc behaves as malloc, but operates within
1322*2680e0c0SChristopher Ferris   the given space.
1323*2680e0c0SChristopher Ferris */
1324*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1325*2680e0c0SChristopher Ferris 
1326*2680e0c0SChristopher Ferris /*
1327*2680e0c0SChristopher Ferris   mspace_free behaves as free, but operates within
1328*2680e0c0SChristopher Ferris   the given space.
1329*2680e0c0SChristopher Ferris 
1330*2680e0c0SChristopher Ferris   If compiled with FOOTERS==1, mspace_free is not actually needed.
1331*2680e0c0SChristopher Ferris   free may be called instead of mspace_free because freed chunks from
1332*2680e0c0SChristopher Ferris   any space are handled by their originating spaces.
1333*2680e0c0SChristopher Ferris */
1334*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1335*2680e0c0SChristopher Ferris 
1336*2680e0c0SChristopher Ferris /*
1337*2680e0c0SChristopher Ferris   mspace_realloc behaves as realloc, but operates within
1338*2680e0c0SChristopher Ferris   the given space.
1339*2680e0c0SChristopher Ferris 
1340*2680e0c0SChristopher Ferris   If compiled with FOOTERS==1, mspace_realloc is not actually
1341*2680e0c0SChristopher Ferris   needed.  realloc may be called instead of mspace_realloc because
1342*2680e0c0SChristopher Ferris   realloced chunks from any space are handled by their originating
1343*2680e0c0SChristopher Ferris   spaces.
1344*2680e0c0SChristopher Ferris */
1345*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1346*2680e0c0SChristopher Ferris 
1347*2680e0c0SChristopher Ferris /*
1348*2680e0c0SChristopher Ferris   mspace_calloc behaves as calloc, but operates within
1349*2680e0c0SChristopher Ferris   the given space.
1350*2680e0c0SChristopher Ferris */
1351*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1352*2680e0c0SChristopher Ferris 
1353*2680e0c0SChristopher Ferris /*
1354*2680e0c0SChristopher Ferris   mspace_memalign behaves as memalign, but operates within
1355*2680e0c0SChristopher Ferris   the given space.
1356*2680e0c0SChristopher Ferris */
1357*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1358*2680e0c0SChristopher Ferris 
1359*2680e0c0SChristopher Ferris /*
1360*2680e0c0SChristopher Ferris   mspace_independent_calloc behaves as independent_calloc, but
1361*2680e0c0SChristopher Ferris   operates within the given space.
1362*2680e0c0SChristopher Ferris */
1363*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1364*2680e0c0SChristopher Ferris                                  size_t elem_size, void* chunks[]);
1365*2680e0c0SChristopher Ferris 
1366*2680e0c0SChristopher Ferris /*
1367*2680e0c0SChristopher Ferris   mspace_independent_comalloc behaves as independent_comalloc, but
1368*2680e0c0SChristopher Ferris   operates within the given space.
1369*2680e0c0SChristopher Ferris */
1370*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1371*2680e0c0SChristopher Ferris                                    size_t sizes[], void* chunks[]);
1372*2680e0c0SChristopher Ferris 
1373*2680e0c0SChristopher Ferris /*
1374*2680e0c0SChristopher Ferris   mspace_footprint() returns the number of bytes obtained from the
1375*2680e0c0SChristopher Ferris   system for this space.
1376*2680e0c0SChristopher Ferris */
1377*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1378*2680e0c0SChristopher Ferris 
1379*2680e0c0SChristopher Ferris /*
1380*2680e0c0SChristopher Ferris   mspace_max_footprint() returns the peak number of bytes obtained from the
1381*2680e0c0SChristopher Ferris   system for this space.
1382*2680e0c0SChristopher Ferris */
1383*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1384*2680e0c0SChristopher Ferris 
1385*2680e0c0SChristopher Ferris 
1386*2680e0c0SChristopher Ferris #if !NO_MALLINFO
1387*2680e0c0SChristopher Ferris /*
1388*2680e0c0SChristopher Ferris   mspace_mallinfo behaves as mallinfo, but reports properties of
1389*2680e0c0SChristopher Ferris   the given space.
1390*2680e0c0SChristopher Ferris */
1391*2680e0c0SChristopher Ferris DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1392*2680e0c0SChristopher Ferris #endif /* NO_MALLINFO */
1393*2680e0c0SChristopher Ferris 
1394*2680e0c0SChristopher Ferris /*
1395*2680e0c0SChristopher Ferris   malloc_usable_size(void* p) behaves the same as malloc_usable_size;
1396*2680e0c0SChristopher Ferris */
1397*2680e0c0SChristopher Ferris DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1398*2680e0c0SChristopher Ferris 
1399*2680e0c0SChristopher Ferris /*
1400*2680e0c0SChristopher Ferris   mspace_malloc_stats behaves as malloc_stats, but reports
1401*2680e0c0SChristopher Ferris   properties of the given space.
1402*2680e0c0SChristopher Ferris */
1403*2680e0c0SChristopher Ferris DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1404*2680e0c0SChristopher Ferris 
1405*2680e0c0SChristopher Ferris /*
1406*2680e0c0SChristopher Ferris   mspace_trim behaves as malloc_trim, but
1407*2680e0c0SChristopher Ferris   operates within the given space.
1408*2680e0c0SChristopher Ferris */
1409*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1410*2680e0c0SChristopher Ferris 
1411*2680e0c0SChristopher Ferris /*
1412*2680e0c0SChristopher Ferris   An alias for mallopt.
1413*2680e0c0SChristopher Ferris */
1414*2680e0c0SChristopher Ferris DLMALLOC_EXPORT int mspace_mallopt(int, int);
1415*2680e0c0SChristopher Ferris 
1416*2680e0c0SChristopher Ferris #endif /* MSPACES */
1417*2680e0c0SChristopher Ferris 
1418*2680e0c0SChristopher Ferris #ifdef __cplusplus
1419*2680e0c0SChristopher Ferris }  /* end of extern "C" */
1420*2680e0c0SChristopher Ferris #endif /* __cplusplus */
1421*2680e0c0SChristopher Ferris 
1422*2680e0c0SChristopher Ferris /*
1423*2680e0c0SChristopher Ferris   ========================================================================
1424*2680e0c0SChristopher Ferris   To make a fully customizable malloc.h header file, cut everything
1425*2680e0c0SChristopher Ferris   above this line, put into file malloc.h, edit to suit, and #include it
1426*2680e0c0SChristopher Ferris   on the next line, as well as in programs that use this malloc.
1427*2680e0c0SChristopher Ferris   ========================================================================
1428*2680e0c0SChristopher Ferris */
1429*2680e0c0SChristopher Ferris 
1430*2680e0c0SChristopher Ferris /* #include "malloc.h" */
1431*2680e0c0SChristopher Ferris 
1432*2680e0c0SChristopher Ferris /*------------------------------ internal #includes ---------------------- */
1433*2680e0c0SChristopher Ferris 
1434*2680e0c0SChristopher Ferris #ifdef _MSC_VER
1435*2680e0c0SChristopher Ferris #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1436*2680e0c0SChristopher Ferris #endif /* _MSC_VER */
1437*2680e0c0SChristopher Ferris #if !NO_MALLOC_STATS
1438*2680e0c0SChristopher Ferris #include <stdio.h>       /* for printing in malloc_stats */
1439*2680e0c0SChristopher Ferris #endif /* NO_MALLOC_STATS */
1440*2680e0c0SChristopher Ferris #ifndef LACKS_ERRNO_H
1441*2680e0c0SChristopher Ferris #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1442*2680e0c0SChristopher Ferris #endif /* LACKS_ERRNO_H */
1443*2680e0c0SChristopher Ferris #ifdef DEBUG
1444*2680e0c0SChristopher Ferris #if ABORT_ON_ASSERT_FAILURE
1445*2680e0c0SChristopher Ferris #undef assert
1446*2680e0c0SChristopher Ferris #define assert(x) if(!(x)) ABORT
1447*2680e0c0SChristopher Ferris #else /* ABORT_ON_ASSERT_FAILURE */
1448*2680e0c0SChristopher Ferris #include <assert.h>
1449*2680e0c0SChristopher Ferris #endif /* ABORT_ON_ASSERT_FAILURE */
1450*2680e0c0SChristopher Ferris #else  /* DEBUG */
1451*2680e0c0SChristopher Ferris #ifndef assert
1452*2680e0c0SChristopher Ferris #define assert(x)
1453*2680e0c0SChristopher Ferris #endif
1454*2680e0c0SChristopher Ferris #define DEBUG 0
1455*2680e0c0SChristopher Ferris #endif /* DEBUG */
1456*2680e0c0SChristopher Ferris #if !defined(WIN32) && !defined(LACKS_TIME_H)
1457*2680e0c0SChristopher Ferris #include <time.h>        /* for magic initialization */
1458*2680e0c0SChristopher Ferris #endif /* WIN32 */
1459*2680e0c0SChristopher Ferris #ifndef LACKS_STDLIB_H
1460*2680e0c0SChristopher Ferris #include <stdlib.h>      /* for abort() */
1461*2680e0c0SChristopher Ferris #endif /* LACKS_STDLIB_H */
1462*2680e0c0SChristopher Ferris #ifndef LACKS_STRING_H
1463*2680e0c0SChristopher Ferris #include <string.h>      /* for memset etc */
1464*2680e0c0SChristopher Ferris #endif  /* LACKS_STRING_H */
1465*2680e0c0SChristopher Ferris #if USE_BUILTIN_FFS
1466*2680e0c0SChristopher Ferris #ifndef LACKS_STRINGS_H
1467*2680e0c0SChristopher Ferris #include <strings.h>     /* for ffs */
1468*2680e0c0SChristopher Ferris #endif /* LACKS_STRINGS_H */
1469*2680e0c0SChristopher Ferris #endif /* USE_BUILTIN_FFS */
1470*2680e0c0SChristopher Ferris #if HAVE_MMAP
1471*2680e0c0SChristopher Ferris #ifndef LACKS_SYS_MMAN_H
1472*2680e0c0SChristopher Ferris /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1473*2680e0c0SChristopher Ferris #if (defined(linux) && !defined(__USE_GNU))
1474*2680e0c0SChristopher Ferris #define __USE_GNU 1
1475*2680e0c0SChristopher Ferris #include <sys/mman.h>    /* for mmap */
1476*2680e0c0SChristopher Ferris #undef __USE_GNU
1477*2680e0c0SChristopher Ferris #else
1478*2680e0c0SChristopher Ferris #include <sys/mman.h>    /* for mmap */
1479*2680e0c0SChristopher Ferris #endif /* linux */
1480*2680e0c0SChristopher Ferris #endif /* LACKS_SYS_MMAN_H */
1481*2680e0c0SChristopher Ferris #ifndef LACKS_FCNTL_H
1482*2680e0c0SChristopher Ferris #include <fcntl.h>
1483*2680e0c0SChristopher Ferris #endif /* LACKS_FCNTL_H */
1484*2680e0c0SChristopher Ferris #endif /* HAVE_MMAP */
1485*2680e0c0SChristopher Ferris #ifndef LACKS_UNISTD_H
1486*2680e0c0SChristopher Ferris #include <unistd.h>     /* for sbrk, sysconf */
1487*2680e0c0SChristopher Ferris #else /* LACKS_UNISTD_H */
1488*2680e0c0SChristopher Ferris #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1489*2680e0c0SChristopher Ferris extern void*     sbrk(ptrdiff_t);
1490*2680e0c0SChristopher Ferris #endif /* FreeBSD etc */
1491*2680e0c0SChristopher Ferris #endif /* LACKS_UNISTD_H */
1492*2680e0c0SChristopher Ferris 
1493*2680e0c0SChristopher Ferris /* Declarations for locking */
1494*2680e0c0SChristopher Ferris #if USE_LOCKS
1495*2680e0c0SChristopher Ferris #ifndef WIN32
1496*2680e0c0SChristopher Ferris #if defined (__SVR4) && defined (__sun)  /* solaris */
1497*2680e0c0SChristopher Ferris #include <thread.h>
1498*2680e0c0SChristopher Ferris #elif !defined(LACKS_SCHED_H)
1499*2680e0c0SChristopher Ferris #include <sched.h>
1500*2680e0c0SChristopher Ferris #endif /* solaris or LACKS_SCHED_H */
1501*2680e0c0SChristopher Ferris #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1502*2680e0c0SChristopher Ferris #include <pthread.h>
1503*2680e0c0SChristopher Ferris #endif /* USE_RECURSIVE_LOCKS ... */
1504*2680e0c0SChristopher Ferris #elif defined(_MSC_VER)
1505*2680e0c0SChristopher Ferris #ifndef _M_AMD64
1506*2680e0c0SChristopher Ferris /* These are already defined on AMD64 builds */
1507*2680e0c0SChristopher Ferris #ifdef __cplusplus
1508*2680e0c0SChristopher Ferris extern "C" {
1509*2680e0c0SChristopher Ferris #endif /* __cplusplus */
1510*2680e0c0SChristopher Ferris LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1511*2680e0c0SChristopher Ferris LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1512*2680e0c0SChristopher Ferris #ifdef __cplusplus
1513*2680e0c0SChristopher Ferris }
1514*2680e0c0SChristopher Ferris #endif /* __cplusplus */
1515*2680e0c0SChristopher Ferris #endif /* _M_AMD64 */
1516*2680e0c0SChristopher Ferris #pragma intrinsic (_InterlockedCompareExchange)
1517*2680e0c0SChristopher Ferris #pragma intrinsic (_InterlockedExchange)
1518*2680e0c0SChristopher Ferris #define interlockedcompareexchange _InterlockedCompareExchange
1519*2680e0c0SChristopher Ferris #define interlockedexchange _InterlockedExchange
1520*2680e0c0SChristopher Ferris #elif defined(WIN32) && defined(__GNUC__)
1521*2680e0c0SChristopher Ferris #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1522*2680e0c0SChristopher Ferris #define interlockedexchange __sync_lock_test_and_set
1523*2680e0c0SChristopher Ferris #endif /* Win32 */
1524*2680e0c0SChristopher Ferris #else /* USE_LOCKS */
1525*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
1526*2680e0c0SChristopher Ferris 
1527*2680e0c0SChristopher Ferris #ifndef LOCK_AT_FORK
1528*2680e0c0SChristopher Ferris #define LOCK_AT_FORK 0
1529*2680e0c0SChristopher Ferris #endif
1530*2680e0c0SChristopher Ferris 
1531*2680e0c0SChristopher Ferris /* Declarations for bit scanning on win32 */
1532*2680e0c0SChristopher Ferris #if defined(_MSC_VER) && _MSC_VER>=1300
1533*2680e0c0SChristopher Ferris #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1534*2680e0c0SChristopher Ferris #ifdef __cplusplus
1535*2680e0c0SChristopher Ferris extern "C" {
1536*2680e0c0SChristopher Ferris #endif /* __cplusplus */
1537*2680e0c0SChristopher Ferris unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1538*2680e0c0SChristopher Ferris unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1539*2680e0c0SChristopher Ferris #ifdef __cplusplus
1540*2680e0c0SChristopher Ferris }
1541*2680e0c0SChristopher Ferris #endif /* __cplusplus */
1542*2680e0c0SChristopher Ferris 
1543*2680e0c0SChristopher Ferris #define BitScanForward _BitScanForward
1544*2680e0c0SChristopher Ferris #define BitScanReverse _BitScanReverse
1545*2680e0c0SChristopher Ferris #pragma intrinsic(_BitScanForward)
1546*2680e0c0SChristopher Ferris #pragma intrinsic(_BitScanReverse)
1547*2680e0c0SChristopher Ferris #endif /* BitScanForward */
1548*2680e0c0SChristopher Ferris #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1549*2680e0c0SChristopher Ferris 
1550*2680e0c0SChristopher Ferris #ifndef WIN32
1551*2680e0c0SChristopher Ferris #ifndef malloc_getpagesize
1552*2680e0c0SChristopher Ferris #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1553*2680e0c0SChristopher Ferris #    ifndef _SC_PAGE_SIZE
1554*2680e0c0SChristopher Ferris #      define _SC_PAGE_SIZE _SC_PAGESIZE
1555*2680e0c0SChristopher Ferris #    endif
1556*2680e0c0SChristopher Ferris #  endif
1557*2680e0c0SChristopher Ferris #  ifdef _SC_PAGE_SIZE
1558*2680e0c0SChristopher Ferris #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1559*2680e0c0SChristopher Ferris #  else
1560*2680e0c0SChristopher Ferris #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1561*2680e0c0SChristopher Ferris        extern size_t getpagesize();
1562*2680e0c0SChristopher Ferris #      define malloc_getpagesize getpagesize()
1563*2680e0c0SChristopher Ferris #    else
1564*2680e0c0SChristopher Ferris #      ifdef WIN32 /* use supplied emulation of getpagesize */
1565*2680e0c0SChristopher Ferris #        define malloc_getpagesize getpagesize()
1566*2680e0c0SChristopher Ferris #      else
1567*2680e0c0SChristopher Ferris #        ifndef LACKS_SYS_PARAM_H
1568*2680e0c0SChristopher Ferris #          include <sys/param.h>
1569*2680e0c0SChristopher Ferris #        endif
1570*2680e0c0SChristopher Ferris #        ifdef EXEC_PAGESIZE
1571*2680e0c0SChristopher Ferris #          define malloc_getpagesize EXEC_PAGESIZE
1572*2680e0c0SChristopher Ferris #        else
1573*2680e0c0SChristopher Ferris #          ifdef NBPG
1574*2680e0c0SChristopher Ferris #            ifndef CLSIZE
1575*2680e0c0SChristopher Ferris #              define malloc_getpagesize NBPG
1576*2680e0c0SChristopher Ferris #            else
1577*2680e0c0SChristopher Ferris #              define malloc_getpagesize (NBPG * CLSIZE)
1578*2680e0c0SChristopher Ferris #            endif
1579*2680e0c0SChristopher Ferris #          else
1580*2680e0c0SChristopher Ferris #            ifdef NBPC
1581*2680e0c0SChristopher Ferris #              define malloc_getpagesize NBPC
1582*2680e0c0SChristopher Ferris #            else
1583*2680e0c0SChristopher Ferris #              ifdef PAGESIZE
1584*2680e0c0SChristopher Ferris #                define malloc_getpagesize PAGESIZE
1585*2680e0c0SChristopher Ferris #              else /* just guess */
1586*2680e0c0SChristopher Ferris #                define malloc_getpagesize ((size_t)4096U)
1587*2680e0c0SChristopher Ferris #              endif
1588*2680e0c0SChristopher Ferris #            endif
1589*2680e0c0SChristopher Ferris #          endif
1590*2680e0c0SChristopher Ferris #        endif
1591*2680e0c0SChristopher Ferris #      endif
1592*2680e0c0SChristopher Ferris #    endif
1593*2680e0c0SChristopher Ferris #  endif
1594*2680e0c0SChristopher Ferris #endif
1595*2680e0c0SChristopher Ferris #endif
1596*2680e0c0SChristopher Ferris 
1597*2680e0c0SChristopher Ferris /* ------------------- size_t and alignment properties -------------------- */
1598*2680e0c0SChristopher Ferris 
1599*2680e0c0SChristopher Ferris /* The byte and bit size of a size_t */
1600*2680e0c0SChristopher Ferris #define SIZE_T_SIZE         (sizeof(size_t))
1601*2680e0c0SChristopher Ferris #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1602*2680e0c0SChristopher Ferris 
1603*2680e0c0SChristopher Ferris /* Some constants coerced to size_t */
1604*2680e0c0SChristopher Ferris /* Annoying but necessary to avoid errors on some platforms */
1605*2680e0c0SChristopher Ferris #define SIZE_T_ZERO         ((size_t)0)
1606*2680e0c0SChristopher Ferris #define SIZE_T_ONE          ((size_t)1)
1607*2680e0c0SChristopher Ferris #define SIZE_T_TWO          ((size_t)2)
1608*2680e0c0SChristopher Ferris #define SIZE_T_FOUR         ((size_t)4)
1609*2680e0c0SChristopher Ferris #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1610*2680e0c0SChristopher Ferris #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1611*2680e0c0SChristopher Ferris #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1612*2680e0c0SChristopher Ferris #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1613*2680e0c0SChristopher Ferris 
1614*2680e0c0SChristopher Ferris /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1615*2680e0c0SChristopher Ferris #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1616*2680e0c0SChristopher Ferris 
1617*2680e0c0SChristopher Ferris /* True if address a has acceptable alignment */
1618*2680e0c0SChristopher Ferris #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1619*2680e0c0SChristopher Ferris 
1620*2680e0c0SChristopher Ferris /* the number of bytes to offset an address to align it */
1621*2680e0c0SChristopher Ferris #define align_offset(A)\
1622*2680e0c0SChristopher Ferris  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1623*2680e0c0SChristopher Ferris   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1624*2680e0c0SChristopher Ferris 
1625*2680e0c0SChristopher Ferris /* -------------------------- MMAP preliminaries ------------------------- */
1626*2680e0c0SChristopher Ferris 
1627*2680e0c0SChristopher Ferris /*
1628*2680e0c0SChristopher Ferris    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1629*2680e0c0SChristopher Ferris    checks to fail so compiler optimizer can delete code rather than
1630*2680e0c0SChristopher Ferris    using so many "#if"s.
1631*2680e0c0SChristopher Ferris */
1632*2680e0c0SChristopher Ferris 
1633*2680e0c0SChristopher Ferris 
1634*2680e0c0SChristopher Ferris /* MORECORE and MMAP must return MFAIL on failure */
1635*2680e0c0SChristopher Ferris #define MFAIL                ((void*)(MAX_SIZE_T))
1636*2680e0c0SChristopher Ferris #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1637*2680e0c0SChristopher Ferris 
1638*2680e0c0SChristopher Ferris #if HAVE_MMAP
1639*2680e0c0SChristopher Ferris 
1640*2680e0c0SChristopher Ferris #ifndef WIN32
1641*2680e0c0SChristopher Ferris #define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
1642*2680e0c0SChristopher Ferris #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1643*2680e0c0SChristopher Ferris #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1644*2680e0c0SChristopher Ferris #define MAP_ANONYMOUS        MAP_ANON
1645*2680e0c0SChristopher Ferris #endif /* MAP_ANON */
1646*2680e0c0SChristopher Ferris #ifdef MAP_ANONYMOUS
1647*2680e0c0SChristopher Ferris #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1648*2680e0c0SChristopher Ferris #define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1649*2680e0c0SChristopher Ferris #else /* MAP_ANONYMOUS */
1650*2680e0c0SChristopher Ferris /*
1651*2680e0c0SChristopher Ferris    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1652*2680e0c0SChristopher Ferris    is unlikely to be needed, but is supplied just in case.
1653*2680e0c0SChristopher Ferris */
1654*2680e0c0SChristopher Ferris #define MMAP_FLAGS           (MAP_PRIVATE)
1655*2680e0c0SChristopher Ferris static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1656*2680e0c0SChristopher Ferris #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1657*2680e0c0SChristopher Ferris            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1658*2680e0c0SChristopher Ferris             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1659*2680e0c0SChristopher Ferris             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1660*2680e0c0SChristopher Ferris #endif /* MAP_ANONYMOUS */
1661*2680e0c0SChristopher Ferris 
1662*2680e0c0SChristopher Ferris #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1663*2680e0c0SChristopher Ferris 
1664*2680e0c0SChristopher Ferris #else /* WIN32 */
1665*2680e0c0SChristopher Ferris 
1666*2680e0c0SChristopher Ferris /* Win32 MMAP via VirtualAlloc */
win32mmap(size_t size)1667*2680e0c0SChristopher Ferris static FORCEINLINE void* win32mmap(size_t size) {
1668*2680e0c0SChristopher Ferris   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1669*2680e0c0SChristopher Ferris   return (ptr != 0)? ptr: MFAIL;
1670*2680e0c0SChristopher Ferris }
1671*2680e0c0SChristopher Ferris 
1672*2680e0c0SChristopher Ferris /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
win32direct_mmap(size_t size)1673*2680e0c0SChristopher Ferris static FORCEINLINE void* win32direct_mmap(size_t size) {
1674*2680e0c0SChristopher Ferris   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1675*2680e0c0SChristopher Ferris                            PAGE_READWRITE);
1676*2680e0c0SChristopher Ferris   return (ptr != 0)? ptr: MFAIL;
1677*2680e0c0SChristopher Ferris }
1678*2680e0c0SChristopher Ferris 
1679*2680e0c0SChristopher Ferris /* This function supports releasing coalesed segments */
win32munmap(void * ptr,size_t size)1680*2680e0c0SChristopher Ferris static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1681*2680e0c0SChristopher Ferris   MEMORY_BASIC_INFORMATION minfo;
1682*2680e0c0SChristopher Ferris   char* cptr = (char*)ptr;
1683*2680e0c0SChristopher Ferris   while (size) {
1684*2680e0c0SChristopher Ferris     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1685*2680e0c0SChristopher Ferris       return -1;
1686*2680e0c0SChristopher Ferris     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1687*2680e0c0SChristopher Ferris         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1688*2680e0c0SChristopher Ferris       return -1;
1689*2680e0c0SChristopher Ferris     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1690*2680e0c0SChristopher Ferris       return -1;
1691*2680e0c0SChristopher Ferris     cptr += minfo.RegionSize;
1692*2680e0c0SChristopher Ferris     size -= minfo.RegionSize;
1693*2680e0c0SChristopher Ferris   }
1694*2680e0c0SChristopher Ferris   return 0;
1695*2680e0c0SChristopher Ferris }
1696*2680e0c0SChristopher Ferris 
1697*2680e0c0SChristopher Ferris #define MMAP_DEFAULT(s)             win32mmap(s)
1698*2680e0c0SChristopher Ferris #define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
1699*2680e0c0SChristopher Ferris #define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
1700*2680e0c0SChristopher Ferris #endif /* WIN32 */
1701*2680e0c0SChristopher Ferris #endif /* HAVE_MMAP */
1702*2680e0c0SChristopher Ferris 
1703*2680e0c0SChristopher Ferris #if HAVE_MREMAP
1704*2680e0c0SChristopher Ferris #ifndef WIN32
1705*2680e0c0SChristopher Ferris #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1706*2680e0c0SChristopher Ferris #endif /* WIN32 */
1707*2680e0c0SChristopher Ferris #endif /* HAVE_MREMAP */
1708*2680e0c0SChristopher Ferris 
1709*2680e0c0SChristopher Ferris /**
1710*2680e0c0SChristopher Ferris  * Define CALL_MORECORE
1711*2680e0c0SChristopher Ferris  */
1712*2680e0c0SChristopher Ferris #if HAVE_MORECORE
1713*2680e0c0SChristopher Ferris     #ifdef MORECORE
1714*2680e0c0SChristopher Ferris         #define CALL_MORECORE(S)    MORECORE(S)
1715*2680e0c0SChristopher Ferris     #else  /* MORECORE */
1716*2680e0c0SChristopher Ferris         #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
1717*2680e0c0SChristopher Ferris     #endif /* MORECORE */
1718*2680e0c0SChristopher Ferris #else  /* HAVE_MORECORE */
1719*2680e0c0SChristopher Ferris     #define CALL_MORECORE(S)        MFAIL
1720*2680e0c0SChristopher Ferris #endif /* HAVE_MORECORE */
1721*2680e0c0SChristopher Ferris 
1722*2680e0c0SChristopher Ferris /**
1723*2680e0c0SChristopher Ferris  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1724*2680e0c0SChristopher Ferris  */
1725*2680e0c0SChristopher Ferris #if HAVE_MMAP
1726*2680e0c0SChristopher Ferris     #define USE_MMAP_BIT            (SIZE_T_ONE)
1727*2680e0c0SChristopher Ferris 
1728*2680e0c0SChristopher Ferris     #ifdef MMAP
1729*2680e0c0SChristopher Ferris         #define CALL_MMAP(s)        MMAP(s)
1730*2680e0c0SChristopher Ferris     #else /* MMAP */
1731*2680e0c0SChristopher Ferris         #define CALL_MMAP(s)        MMAP_DEFAULT(s)
1732*2680e0c0SChristopher Ferris     #endif /* MMAP */
1733*2680e0c0SChristopher Ferris     #ifdef MUNMAP
1734*2680e0c0SChristopher Ferris         #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
1735*2680e0c0SChristopher Ferris     #else /* MUNMAP */
1736*2680e0c0SChristopher Ferris         #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
1737*2680e0c0SChristopher Ferris     #endif /* MUNMAP */
1738*2680e0c0SChristopher Ferris     #ifdef DIRECT_MMAP
1739*2680e0c0SChristopher Ferris         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1740*2680e0c0SChristopher Ferris     #else /* DIRECT_MMAP */
1741*2680e0c0SChristopher Ferris         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1742*2680e0c0SChristopher Ferris     #endif /* DIRECT_MMAP */
1743*2680e0c0SChristopher Ferris #else  /* HAVE_MMAP */
1744*2680e0c0SChristopher Ferris     #define USE_MMAP_BIT            (SIZE_T_ZERO)
1745*2680e0c0SChristopher Ferris 
1746*2680e0c0SChristopher Ferris     #define MMAP(s)                 MFAIL
1747*2680e0c0SChristopher Ferris     #define MUNMAP(a, s)            (-1)
1748*2680e0c0SChristopher Ferris     #define DIRECT_MMAP(s)          MFAIL
1749*2680e0c0SChristopher Ferris     #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
1750*2680e0c0SChristopher Ferris     #define CALL_MMAP(s)            MMAP(s)
1751*2680e0c0SChristopher Ferris     #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
1752*2680e0c0SChristopher Ferris #endif /* HAVE_MMAP */
1753*2680e0c0SChristopher Ferris 
1754*2680e0c0SChristopher Ferris /**
1755*2680e0c0SChristopher Ferris  * Define CALL_MREMAP
1756*2680e0c0SChristopher Ferris  */
1757*2680e0c0SChristopher Ferris #if HAVE_MMAP && HAVE_MREMAP
1758*2680e0c0SChristopher Ferris     #ifdef MREMAP
1759*2680e0c0SChristopher Ferris         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1760*2680e0c0SChristopher Ferris     #else /* MREMAP */
1761*2680e0c0SChristopher Ferris         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1762*2680e0c0SChristopher Ferris     #endif /* MREMAP */
1763*2680e0c0SChristopher Ferris #else  /* HAVE_MMAP && HAVE_MREMAP */
1764*2680e0c0SChristopher Ferris     #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
1765*2680e0c0SChristopher Ferris #endif /* HAVE_MMAP && HAVE_MREMAP */
1766*2680e0c0SChristopher Ferris 
1767*2680e0c0SChristopher Ferris /* mstate bit set if continguous morecore disabled or failed */
1768*2680e0c0SChristopher Ferris #define USE_NONCONTIGUOUS_BIT (4U)
1769*2680e0c0SChristopher Ferris 
1770*2680e0c0SChristopher Ferris /* segment bit set in create_mspace_with_base */
1771*2680e0c0SChristopher Ferris #define EXTERN_BIT            (8U)
1772*2680e0c0SChristopher Ferris 
1773*2680e0c0SChristopher Ferris 
1774*2680e0c0SChristopher Ferris /* --------------------------- Lock preliminaries ------------------------ */
1775*2680e0c0SChristopher Ferris 
1776*2680e0c0SChristopher Ferris /*
1777*2680e0c0SChristopher Ferris   When locks are defined, there is one global lock, plus
1778*2680e0c0SChristopher Ferris   one per-mspace lock.
1779*2680e0c0SChristopher Ferris 
1780*2680e0c0SChristopher Ferris   The global lock_ensures that mparams.magic and other unique
1781*2680e0c0SChristopher Ferris   mparams values are initialized only once. It also protects
1782*2680e0c0SChristopher Ferris   sequences of calls to MORECORE.  In many cases sys_alloc requires
1783*2680e0c0SChristopher Ferris   two calls, that should not be interleaved with calls by other
1784*2680e0c0SChristopher Ferris   threads.  This does not protect against direct calls to MORECORE
1785*2680e0c0SChristopher Ferris   by other threads not using this lock, so there is still code to
1786*2680e0c0SChristopher Ferris   cope the best we can on interference.
1787*2680e0c0SChristopher Ferris 
1788*2680e0c0SChristopher Ferris   Per-mspace locks surround calls to malloc, free, etc.
1789*2680e0c0SChristopher Ferris   By default, locks are simple non-reentrant mutexes.
1790*2680e0c0SChristopher Ferris 
1791*2680e0c0SChristopher Ferris   Because lock-protected regions generally have bounded times, it is
1792*2680e0c0SChristopher Ferris   OK to use the supplied simple spinlocks. Spinlocks are likely to
1793*2680e0c0SChristopher Ferris   improve performance for lightly contended applications, but worsen
1794*2680e0c0SChristopher Ferris   performance under heavy contention.
1795*2680e0c0SChristopher Ferris 
1796*2680e0c0SChristopher Ferris   If USE_LOCKS is > 1, the definitions of lock routines here are
1797*2680e0c0SChristopher Ferris   bypassed, in which case you will need to define the type MLOCK_T,
1798*2680e0c0SChristopher Ferris   and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1799*2680e0c0SChristopher Ferris   and TRY_LOCK.  You must also declare a
1800*2680e0c0SChristopher Ferris     static MLOCK_T malloc_global_mutex = { initialization values };.
1801*2680e0c0SChristopher Ferris 
1802*2680e0c0SChristopher Ferris */
1803*2680e0c0SChristopher Ferris 
1804*2680e0c0SChristopher Ferris #if !USE_LOCKS
1805*2680e0c0SChristopher Ferris #define USE_LOCK_BIT               (0U)
1806*2680e0c0SChristopher Ferris #define INITIAL_LOCK(l)            (0)
1807*2680e0c0SChristopher Ferris #define DESTROY_LOCK(l)            (0)
1808*2680e0c0SChristopher Ferris #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1809*2680e0c0SChristopher Ferris #define RELEASE_MALLOC_GLOBAL_LOCK()
1810*2680e0c0SChristopher Ferris 
1811*2680e0c0SChristopher Ferris #else
1812*2680e0c0SChristopher Ferris #if USE_LOCKS > 1
1813*2680e0c0SChristopher Ferris /* -----------------------  User-defined locks ------------------------ */
1814*2680e0c0SChristopher Ferris /* Define your own lock implementation here */
1815*2680e0c0SChristopher Ferris /* #define INITIAL_LOCK(lk)  ... */
1816*2680e0c0SChristopher Ferris /* #define DESTROY_LOCK(lk)  ... */
1817*2680e0c0SChristopher Ferris /* #define ACQUIRE_LOCK(lk)  ... */
1818*2680e0c0SChristopher Ferris /* #define RELEASE_LOCK(lk)  ... */
1819*2680e0c0SChristopher Ferris /* #define TRY_LOCK(lk) ... */
1820*2680e0c0SChristopher Ferris /* static MLOCK_T malloc_global_mutex = ... */
1821*2680e0c0SChristopher Ferris 
1822*2680e0c0SChristopher Ferris #elif USE_SPIN_LOCKS
1823*2680e0c0SChristopher Ferris 
1824*2680e0c0SChristopher Ferris /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1825*2680e0c0SChristopher Ferris /* Note CAS_LOCK defined to return 0 on success */
1826*2680e0c0SChristopher Ferris 
1827*2680e0c0SChristopher Ferris #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1828*2680e0c0SChristopher Ferris #define CAS_LOCK(sl)     __sync_lock_test_and_set(sl, 1)
1829*2680e0c0SChristopher Ferris #define CLEAR_LOCK(sl)   __sync_lock_release(sl)
1830*2680e0c0SChristopher Ferris 
1831*2680e0c0SChristopher Ferris #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1832*2680e0c0SChristopher Ferris /* Custom spin locks for older gcc on x86 */
x86_cas_lock(int * sl)1833*2680e0c0SChristopher Ferris static FORCEINLINE int x86_cas_lock(int *sl) {
1834*2680e0c0SChristopher Ferris   int ret;
1835*2680e0c0SChristopher Ferris   int val = 1;
1836*2680e0c0SChristopher Ferris   int cmp = 0;
1837*2680e0c0SChristopher Ferris   __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1838*2680e0c0SChristopher Ferris                          : "=a" (ret)
1839*2680e0c0SChristopher Ferris                          : "r" (val), "m" (*(sl)), "0"(cmp)
1840*2680e0c0SChristopher Ferris                          : "memory", "cc");
1841*2680e0c0SChristopher Ferris   return ret;
1842*2680e0c0SChristopher Ferris }
1843*2680e0c0SChristopher Ferris 
x86_clear_lock(int * sl)1844*2680e0c0SChristopher Ferris static FORCEINLINE void x86_clear_lock(int* sl) {
1845*2680e0c0SChristopher Ferris   assert(*sl != 0);
1846*2680e0c0SChristopher Ferris   int prev = 0;
1847*2680e0c0SChristopher Ferris   int ret;
1848*2680e0c0SChristopher Ferris   __asm__ __volatile__ ("lock; xchgl %0, %1"
1849*2680e0c0SChristopher Ferris                         : "=r" (ret)
1850*2680e0c0SChristopher Ferris                         : "m" (*(sl)), "0"(prev)
1851*2680e0c0SChristopher Ferris                         : "memory");
1852*2680e0c0SChristopher Ferris }
1853*2680e0c0SChristopher Ferris 
1854*2680e0c0SChristopher Ferris #define CAS_LOCK(sl)     x86_cas_lock(sl)
1855*2680e0c0SChristopher Ferris #define CLEAR_LOCK(sl)   x86_clear_lock(sl)
1856*2680e0c0SChristopher Ferris 
1857*2680e0c0SChristopher Ferris #else /* Win32 MSC */
1858*2680e0c0SChristopher Ferris #define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
1859*2680e0c0SChristopher Ferris #define CLEAR_LOCK(sl)   interlockedexchange (sl, (LONG)0)
1860*2680e0c0SChristopher Ferris 
1861*2680e0c0SChristopher Ferris #endif /* ... gcc spins locks ... */
1862*2680e0c0SChristopher Ferris 
1863*2680e0c0SChristopher Ferris /* How to yield for a spin lock */
1864*2680e0c0SChristopher Ferris #define SPINS_PER_YIELD       63
1865*2680e0c0SChristopher Ferris #if defined(_MSC_VER)
1866*2680e0c0SChristopher Ferris #define SLEEP_EX_DURATION     50 /* delay for yield/sleep */
1867*2680e0c0SChristopher Ferris #define SPIN_LOCK_YIELD  SleepEx(SLEEP_EX_DURATION, FALSE)
1868*2680e0c0SChristopher Ferris #elif defined (__SVR4) && defined (__sun) /* solaris */
1869*2680e0c0SChristopher Ferris #define SPIN_LOCK_YIELD   thr_yield();
1870*2680e0c0SChristopher Ferris #elif !defined(LACKS_SCHED_H)
1871*2680e0c0SChristopher Ferris #define SPIN_LOCK_YIELD   sched_yield();
1872*2680e0c0SChristopher Ferris #else
1873*2680e0c0SChristopher Ferris #define SPIN_LOCK_YIELD
1874*2680e0c0SChristopher Ferris #endif /* ... yield ... */
1875*2680e0c0SChristopher Ferris 
1876*2680e0c0SChristopher Ferris #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1877*2680e0c0SChristopher Ferris /* Plain spin locks use single word (embedded in malloc_states) */
spin_acquire_lock(int * sl)1878*2680e0c0SChristopher Ferris static int spin_acquire_lock(int *sl) {
1879*2680e0c0SChristopher Ferris   int spins = 0;
1880*2680e0c0SChristopher Ferris   while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1881*2680e0c0SChristopher Ferris     if ((++spins & SPINS_PER_YIELD) == 0) {
1882*2680e0c0SChristopher Ferris       SPIN_LOCK_YIELD;
1883*2680e0c0SChristopher Ferris     }
1884*2680e0c0SChristopher Ferris   }
1885*2680e0c0SChristopher Ferris   return 0;
1886*2680e0c0SChristopher Ferris }
1887*2680e0c0SChristopher Ferris 
1888*2680e0c0SChristopher Ferris #define MLOCK_T               int
1889*2680e0c0SChristopher Ferris #define TRY_LOCK(sl)          !CAS_LOCK(sl)
1890*2680e0c0SChristopher Ferris #define RELEASE_LOCK(sl)      CLEAR_LOCK(sl)
1891*2680e0c0SChristopher Ferris #define ACQUIRE_LOCK(sl)      (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1892*2680e0c0SChristopher Ferris #define INITIAL_LOCK(sl)      (*sl = 0)
1893*2680e0c0SChristopher Ferris #define DESTROY_LOCK(sl)      (0)
1894*2680e0c0SChristopher Ferris static MLOCK_T malloc_global_mutex = 0;
1895*2680e0c0SChristopher Ferris 
1896*2680e0c0SChristopher Ferris #else /* USE_RECURSIVE_LOCKS */
1897*2680e0c0SChristopher Ferris /* types for lock owners */
1898*2680e0c0SChristopher Ferris #ifdef WIN32
1899*2680e0c0SChristopher Ferris #define THREAD_ID_T           DWORD
1900*2680e0c0SChristopher Ferris #define CURRENT_THREAD        GetCurrentThreadId()
1901*2680e0c0SChristopher Ferris #define EQ_OWNER(X,Y)         ((X) == (Y))
1902*2680e0c0SChristopher Ferris #else
1903*2680e0c0SChristopher Ferris /*
1904*2680e0c0SChristopher Ferris   Note: the following assume that pthread_t is a type that can be
1905*2680e0c0SChristopher Ferris   initialized to (casted) zero. If this is not the case, you will need to
1906*2680e0c0SChristopher Ferris   somehow redefine these or not use spin locks.
1907*2680e0c0SChristopher Ferris */
1908*2680e0c0SChristopher Ferris #define THREAD_ID_T           pthread_t
1909*2680e0c0SChristopher Ferris #define CURRENT_THREAD        pthread_self()
1910*2680e0c0SChristopher Ferris #define EQ_OWNER(X,Y)         pthread_equal(X, Y)
1911*2680e0c0SChristopher Ferris #endif
1912*2680e0c0SChristopher Ferris 
1913*2680e0c0SChristopher Ferris struct malloc_recursive_lock {
1914*2680e0c0SChristopher Ferris   int sl;
1915*2680e0c0SChristopher Ferris   unsigned int c;
1916*2680e0c0SChristopher Ferris   THREAD_ID_T threadid;
1917*2680e0c0SChristopher Ferris };
1918*2680e0c0SChristopher Ferris 
1919*2680e0c0SChristopher Ferris #define MLOCK_T  struct malloc_recursive_lock
1920*2680e0c0SChristopher Ferris static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1921*2680e0c0SChristopher Ferris 
recursive_release_lock(MLOCK_T * lk)1922*2680e0c0SChristopher Ferris static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1923*2680e0c0SChristopher Ferris   assert(lk->sl != 0);
1924*2680e0c0SChristopher Ferris   if (--lk->c == 0) {
1925*2680e0c0SChristopher Ferris     CLEAR_LOCK(&lk->sl);
1926*2680e0c0SChristopher Ferris   }
1927*2680e0c0SChristopher Ferris }
1928*2680e0c0SChristopher Ferris 
recursive_acquire_lock(MLOCK_T * lk)1929*2680e0c0SChristopher Ferris static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1930*2680e0c0SChristopher Ferris   THREAD_ID_T mythreadid = CURRENT_THREAD;
1931*2680e0c0SChristopher Ferris   int spins = 0;
1932*2680e0c0SChristopher Ferris   for (;;) {
1933*2680e0c0SChristopher Ferris     if (*((volatile int *)(&lk->sl)) == 0) {
1934*2680e0c0SChristopher Ferris       if (!CAS_LOCK(&lk->sl)) {
1935*2680e0c0SChristopher Ferris         lk->threadid = mythreadid;
1936*2680e0c0SChristopher Ferris         lk->c = 1;
1937*2680e0c0SChristopher Ferris         return 0;
1938*2680e0c0SChristopher Ferris       }
1939*2680e0c0SChristopher Ferris     }
1940*2680e0c0SChristopher Ferris     else if (EQ_OWNER(lk->threadid, mythreadid)) {
1941*2680e0c0SChristopher Ferris       ++lk->c;
1942*2680e0c0SChristopher Ferris       return 0;
1943*2680e0c0SChristopher Ferris     }
1944*2680e0c0SChristopher Ferris     if ((++spins & SPINS_PER_YIELD) == 0) {
1945*2680e0c0SChristopher Ferris       SPIN_LOCK_YIELD;
1946*2680e0c0SChristopher Ferris     }
1947*2680e0c0SChristopher Ferris   }
1948*2680e0c0SChristopher Ferris }
1949*2680e0c0SChristopher Ferris 
recursive_try_lock(MLOCK_T * lk)1950*2680e0c0SChristopher Ferris static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1951*2680e0c0SChristopher Ferris   THREAD_ID_T mythreadid = CURRENT_THREAD;
1952*2680e0c0SChristopher Ferris   if (*((volatile int *)(&lk->sl)) == 0) {
1953*2680e0c0SChristopher Ferris     if (!CAS_LOCK(&lk->sl)) {
1954*2680e0c0SChristopher Ferris       lk->threadid = mythreadid;
1955*2680e0c0SChristopher Ferris       lk->c = 1;
1956*2680e0c0SChristopher Ferris       return 1;
1957*2680e0c0SChristopher Ferris     }
1958*2680e0c0SChristopher Ferris   }
1959*2680e0c0SChristopher Ferris   else if (EQ_OWNER(lk->threadid, mythreadid)) {
1960*2680e0c0SChristopher Ferris     ++lk->c;
1961*2680e0c0SChristopher Ferris     return 1;
1962*2680e0c0SChristopher Ferris   }
1963*2680e0c0SChristopher Ferris   return 0;
1964*2680e0c0SChristopher Ferris }
1965*2680e0c0SChristopher Ferris 
1966*2680e0c0SChristopher Ferris #define RELEASE_LOCK(lk)      recursive_release_lock(lk)
1967*2680e0c0SChristopher Ferris #define TRY_LOCK(lk)          recursive_try_lock(lk)
1968*2680e0c0SChristopher Ferris #define ACQUIRE_LOCK(lk)      recursive_acquire_lock(lk)
1969*2680e0c0SChristopher Ferris #define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1970*2680e0c0SChristopher Ferris #define DESTROY_LOCK(lk)      (0)
1971*2680e0c0SChristopher Ferris #endif /* USE_RECURSIVE_LOCKS */
1972*2680e0c0SChristopher Ferris 
1973*2680e0c0SChristopher Ferris #elif defined(WIN32) /* Win32 critical sections */
1974*2680e0c0SChristopher Ferris #define MLOCK_T               CRITICAL_SECTION
1975*2680e0c0SChristopher Ferris #define ACQUIRE_LOCK(lk)      (EnterCriticalSection(lk), 0)
1976*2680e0c0SChristopher Ferris #define RELEASE_LOCK(lk)      LeaveCriticalSection(lk)
1977*2680e0c0SChristopher Ferris #define TRY_LOCK(lk)          TryEnterCriticalSection(lk)
1978*2680e0c0SChristopher Ferris #define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1979*2680e0c0SChristopher Ferris #define DESTROY_LOCK(lk)      (DeleteCriticalSection(lk), 0)
1980*2680e0c0SChristopher Ferris #define NEED_GLOBAL_LOCK_INIT
1981*2680e0c0SChristopher Ferris 
1982*2680e0c0SChristopher Ferris static MLOCK_T malloc_global_mutex;
1983*2680e0c0SChristopher Ferris static volatile LONG malloc_global_mutex_status;
1984*2680e0c0SChristopher Ferris 
1985*2680e0c0SChristopher Ferris /* Use spin loop to initialize global lock */
init_malloc_global_mutex()1986*2680e0c0SChristopher Ferris static void init_malloc_global_mutex() {
1987*2680e0c0SChristopher Ferris   for (;;) {
1988*2680e0c0SChristopher Ferris     long stat = malloc_global_mutex_status;
1989*2680e0c0SChristopher Ferris     if (stat > 0)
1990*2680e0c0SChristopher Ferris       return;
1991*2680e0c0SChristopher Ferris     /* transition to < 0 while initializing, then to > 0) */
1992*2680e0c0SChristopher Ferris     if (stat == 0 &&
1993*2680e0c0SChristopher Ferris         interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
1994*2680e0c0SChristopher Ferris       InitializeCriticalSection(&malloc_global_mutex);
1995*2680e0c0SChristopher Ferris       interlockedexchange(&malloc_global_mutex_status, (LONG)1);
1996*2680e0c0SChristopher Ferris       return;
1997*2680e0c0SChristopher Ferris     }
1998*2680e0c0SChristopher Ferris     SleepEx(0, FALSE);
1999*2680e0c0SChristopher Ferris   }
2000*2680e0c0SChristopher Ferris }
2001*2680e0c0SChristopher Ferris 
2002*2680e0c0SChristopher Ferris #else /* pthreads-based locks */
2003*2680e0c0SChristopher Ferris #define MLOCK_T               pthread_mutex_t
2004*2680e0c0SChristopher Ferris #define ACQUIRE_LOCK(lk)      pthread_mutex_lock(lk)
2005*2680e0c0SChristopher Ferris #define RELEASE_LOCK(lk)      pthread_mutex_unlock(lk)
2006*2680e0c0SChristopher Ferris #define TRY_LOCK(lk)          (!pthread_mutex_trylock(lk))
2007*2680e0c0SChristopher Ferris #define INITIAL_LOCK(lk)      pthread_init_lock(lk)
2008*2680e0c0SChristopher Ferris #define DESTROY_LOCK(lk)      pthread_mutex_destroy(lk)
2009*2680e0c0SChristopher Ferris 
2010*2680e0c0SChristopher Ferris #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2011*2680e0c0SChristopher Ferris /* Cope with old-style linux recursive lock initialization by adding */
2012*2680e0c0SChristopher Ferris /* skipped internal declaration from pthread.h */
2013*2680e0c0SChristopher Ferris extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2014*2680e0c0SChristopher Ferris                                               int __kind));
2015*2680e0c0SChristopher Ferris #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2016*2680e0c0SChristopher Ferris #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2017*2680e0c0SChristopher Ferris #endif /* USE_RECURSIVE_LOCKS ... */
2018*2680e0c0SChristopher Ferris 
2019*2680e0c0SChristopher Ferris static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2020*2680e0c0SChristopher Ferris 
pthread_init_lock(MLOCK_T * lk)2021*2680e0c0SChristopher Ferris static int pthread_init_lock (MLOCK_T *lk) {
2022*2680e0c0SChristopher Ferris   pthread_mutexattr_t attr;
2023*2680e0c0SChristopher Ferris   if (pthread_mutexattr_init(&attr)) return 1;
2024*2680e0c0SChristopher Ferris #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2025*2680e0c0SChristopher Ferris   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2026*2680e0c0SChristopher Ferris #endif
2027*2680e0c0SChristopher Ferris   if (pthread_mutex_init(lk, &attr)) return 1;
2028*2680e0c0SChristopher Ferris   if (pthread_mutexattr_destroy(&attr)) return 1;
2029*2680e0c0SChristopher Ferris   return 0;
2030*2680e0c0SChristopher Ferris }
2031*2680e0c0SChristopher Ferris 
2032*2680e0c0SChristopher Ferris #endif /* ... lock types ... */
2033*2680e0c0SChristopher Ferris 
2034*2680e0c0SChristopher Ferris /* Common code for all lock types */
2035*2680e0c0SChristopher Ferris #define USE_LOCK_BIT               (2U)
2036*2680e0c0SChristopher Ferris 
2037*2680e0c0SChristopher Ferris #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2038*2680e0c0SChristopher Ferris #define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
2039*2680e0c0SChristopher Ferris #endif
2040*2680e0c0SChristopher Ferris 
2041*2680e0c0SChristopher Ferris #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2042*2680e0c0SChristopher Ferris #define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
2043*2680e0c0SChristopher Ferris #endif
2044*2680e0c0SChristopher Ferris 
2045*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
2046*2680e0c0SChristopher Ferris 
2047*2680e0c0SChristopher Ferris /* -----------------------  Chunk representations ------------------------ */
2048*2680e0c0SChristopher Ferris 
2049*2680e0c0SChristopher Ferris /*
2050*2680e0c0SChristopher Ferris   (The following includes lightly edited explanations by Colin Plumb.)
2051*2680e0c0SChristopher Ferris 
2052*2680e0c0SChristopher Ferris   The malloc_chunk declaration below is misleading (but accurate and
2053*2680e0c0SChristopher Ferris   necessary).  It declares a "view" into memory allowing access to
2054*2680e0c0SChristopher Ferris   necessary fields at known offsets from a given base.
2055*2680e0c0SChristopher Ferris 
2056*2680e0c0SChristopher Ferris   Chunks of memory are maintained using a `boundary tag' method as
2057*2680e0c0SChristopher Ferris   originally described by Knuth.  (See the paper by Paul Wilson
2058*2680e0c0SChristopher Ferris   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2059*2680e0c0SChristopher Ferris   techniques.)  Sizes of free chunks are stored both in the front of
2060*2680e0c0SChristopher Ferris   each chunk and at the end.  This makes consolidating fragmented
2061*2680e0c0SChristopher Ferris   chunks into bigger chunks fast.  The head fields also hold bits
2062*2680e0c0SChristopher Ferris   representing whether chunks are free or in use.
2063*2680e0c0SChristopher Ferris 
2064*2680e0c0SChristopher Ferris   Here are some pictures to make it clearer.  They are "exploded" to
2065*2680e0c0SChristopher Ferris   show that the state of a chunk can be thought of as extending from
2066*2680e0c0SChristopher Ferris   the high 31 bits of the head field of its header through the
2067*2680e0c0SChristopher Ferris   prev_foot and PINUSE_BIT bit of the following chunk header.
2068*2680e0c0SChristopher Ferris 
2069*2680e0c0SChristopher Ferris   A chunk that's in use looks like:
2070*2680e0c0SChristopher Ferris 
2071*2680e0c0SChristopher Ferris    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2072*2680e0c0SChristopher Ferris            | Size of previous chunk (if P = 0)                             |
2073*2680e0c0SChristopher Ferris            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2074*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2075*2680e0c0SChristopher Ferris          | Size of this chunk                                         1| +-+
2076*2680e0c0SChristopher Ferris    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2077*2680e0c0SChristopher Ferris          |                                                               |
2078*2680e0c0SChristopher Ferris          +-                                                             -+
2079*2680e0c0SChristopher Ferris          |                                                               |
2080*2680e0c0SChristopher Ferris          +-                                                             -+
2081*2680e0c0SChristopher Ferris          |                                                               :
2082*2680e0c0SChristopher Ferris          +-      size - sizeof(size_t) available payload bytes          -+
2083*2680e0c0SChristopher Ferris          :                                                               |
2084*2680e0c0SChristopher Ferris  chunk-> +-                                                             -+
2085*2680e0c0SChristopher Ferris          |                                                               |
2086*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2087*2680e0c0SChristopher Ferris        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2088*2680e0c0SChristopher Ferris        | Size of next chunk (may or may not be in use)               | +-+
2089*2680e0c0SChristopher Ferris  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2090*2680e0c0SChristopher Ferris 
2091*2680e0c0SChristopher Ferris     And if it's free, it looks like this:
2092*2680e0c0SChristopher Ferris 
2093*2680e0c0SChristopher Ferris    chunk-> +-                                                             -+
2094*2680e0c0SChristopher Ferris            | User payload (must be in use, or we would have merged!)       |
2095*2680e0c0SChristopher Ferris            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2097*2680e0c0SChristopher Ferris          | Size of this chunk                                         0| +-+
2098*2680e0c0SChristopher Ferris    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2099*2680e0c0SChristopher Ferris          | Next pointer                                                  |
2100*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101*2680e0c0SChristopher Ferris          | Prev pointer                                                  |
2102*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103*2680e0c0SChristopher Ferris          |                                                               :
2104*2680e0c0SChristopher Ferris          +-      size - sizeof(struct chunk) unused bytes               -+
2105*2680e0c0SChristopher Ferris          :                                                               |
2106*2680e0c0SChristopher Ferris  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2107*2680e0c0SChristopher Ferris          | Size of this chunk                                            |
2108*2680e0c0SChristopher Ferris          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2109*2680e0c0SChristopher Ferris        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2110*2680e0c0SChristopher Ferris        | Size of next chunk (must be in use, or we would have merged)| +-+
2111*2680e0c0SChristopher Ferris  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112*2680e0c0SChristopher Ferris        |                                                               :
2113*2680e0c0SChristopher Ferris        +- User payload                                                -+
2114*2680e0c0SChristopher Ferris        :                                                               |
2115*2680e0c0SChristopher Ferris        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116*2680e0c0SChristopher Ferris                                                                      |0|
2117*2680e0c0SChristopher Ferris                                                                      +-+
2118*2680e0c0SChristopher Ferris   Note that since we always merge adjacent free chunks, the chunks
2119*2680e0c0SChristopher Ferris   adjacent to a free chunk must be in use.
2120*2680e0c0SChristopher Ferris 
2121*2680e0c0SChristopher Ferris   Given a pointer to a chunk (which can be derived trivially from the
2122*2680e0c0SChristopher Ferris   payload pointer) we can, in O(1) time, find out whether the adjacent
2123*2680e0c0SChristopher Ferris   chunks are free, and if so, unlink them from the lists that they
2124*2680e0c0SChristopher Ferris   are on and merge them with the current chunk.
2125*2680e0c0SChristopher Ferris 
2126*2680e0c0SChristopher Ferris   Chunks always begin on even word boundaries, so the mem portion
2127*2680e0c0SChristopher Ferris   (which is returned to the user) is also on an even word boundary, and
2128*2680e0c0SChristopher Ferris   thus at least double-word aligned.
2129*2680e0c0SChristopher Ferris 
2130*2680e0c0SChristopher Ferris   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2131*2680e0c0SChristopher Ferris   chunk size (which is always a multiple of two words), is an in-use
2132*2680e0c0SChristopher Ferris   bit for the *previous* chunk.  If that bit is *clear*, then the
2133*2680e0c0SChristopher Ferris   word before the current chunk size contains the previous chunk
2134*2680e0c0SChristopher Ferris   size, and can be used to find the front of the previous chunk.
2135*2680e0c0SChristopher Ferris   The very first chunk allocated always has this bit set, preventing
2136*2680e0c0SChristopher Ferris   access to non-existent (or non-owned) memory. If pinuse is set for
2137*2680e0c0SChristopher Ferris   any given chunk, then you CANNOT determine the size of the
2138*2680e0c0SChristopher Ferris   previous chunk, and might even get a memory addressing fault when
2139*2680e0c0SChristopher Ferris   trying to do so.
2140*2680e0c0SChristopher Ferris 
2141*2680e0c0SChristopher Ferris   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2142*2680e0c0SChristopher Ferris   the chunk size redundantly records whether the current chunk is
2143*2680e0c0SChristopher Ferris   inuse (unless the chunk is mmapped). This redundancy enables usage
2144*2680e0c0SChristopher Ferris   checks within free and realloc, and reduces indirection when freeing
2145*2680e0c0SChristopher Ferris   and consolidating chunks.
2146*2680e0c0SChristopher Ferris 
2147*2680e0c0SChristopher Ferris   Each freshly allocated chunk must have both cinuse and pinuse set.
2148*2680e0c0SChristopher Ferris   That is, each allocated chunk borders either a previously allocated
2149*2680e0c0SChristopher Ferris   and still in-use chunk, or the base of its memory arena. This is
2150*2680e0c0SChristopher Ferris   ensured by making all allocations from the `lowest' part of any
2151*2680e0c0SChristopher Ferris   found chunk.  Further, no free chunk physically borders another one,
2152*2680e0c0SChristopher Ferris   so each free chunk is known to be preceded and followed by either
2153*2680e0c0SChristopher Ferris   inuse chunks or the ends of memory.
2154*2680e0c0SChristopher Ferris 
2155*2680e0c0SChristopher Ferris   Note that the `foot' of the current chunk is actually represented
2156*2680e0c0SChristopher Ferris   as the prev_foot of the NEXT chunk. This makes it easier to
2157*2680e0c0SChristopher Ferris   deal with alignments etc but can be very confusing when trying
2158*2680e0c0SChristopher Ferris   to extend or adapt this code.
2159*2680e0c0SChristopher Ferris 
2160*2680e0c0SChristopher Ferris   The exceptions to all this are
2161*2680e0c0SChristopher Ferris 
2162*2680e0c0SChristopher Ferris      1. The special chunk `top' is the top-most available chunk (i.e.,
2163*2680e0c0SChristopher Ferris         the one bordering the end of available memory). It is treated
2164*2680e0c0SChristopher Ferris         specially.  Top is never included in any bin, is used only if
2165*2680e0c0SChristopher Ferris         no other chunk is available, and is released back to the
2166*2680e0c0SChristopher Ferris         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
2167*2680e0c0SChristopher Ferris         the top chunk is treated as larger (and thus less well
2168*2680e0c0SChristopher Ferris         fitting) than any other available chunk.  The top chunk
2169*2680e0c0SChristopher Ferris         doesn't update its trailing size field since there is no next
2170*2680e0c0SChristopher Ferris         contiguous chunk that would have to index off it. However,
2171*2680e0c0SChristopher Ferris         space is still allocated for it (TOP_FOOT_SIZE) to enable
2172*2680e0c0SChristopher Ferris         separation or merging when space is extended.
2173*2680e0c0SChristopher Ferris 
2174*2680e0c0SChristopher Ferris      3. Chunks allocated via mmap, have both cinuse and pinuse bits
2175*2680e0c0SChristopher Ferris         cleared in their head fields.  Because they are allocated
2176*2680e0c0SChristopher Ferris         one-by-one, each must carry its own prev_foot field, which is
2177*2680e0c0SChristopher Ferris         also used to hold the offset this chunk has within its mmapped
2178*2680e0c0SChristopher Ferris         region, which is needed to preserve alignment. Each mmapped
2179*2680e0c0SChristopher Ferris         chunk is trailed by the first two fields of a fake next-chunk
2180*2680e0c0SChristopher Ferris         for sake of usage checks.
2181*2680e0c0SChristopher Ferris 
2182*2680e0c0SChristopher Ferris */
2183*2680e0c0SChristopher Ferris 
2184*2680e0c0SChristopher Ferris struct malloc_chunk {
2185*2680e0c0SChristopher Ferris   size_t               prev_foot;  /* Size of previous chunk (if free).  */
2186*2680e0c0SChristopher Ferris   size_t               head;       /* Size and inuse bits. */
2187*2680e0c0SChristopher Ferris   struct malloc_chunk* fd;         /* double links -- used only if free. */
2188*2680e0c0SChristopher Ferris   struct malloc_chunk* bk;
2189*2680e0c0SChristopher Ferris };
2190*2680e0c0SChristopher Ferris 
2191*2680e0c0SChristopher Ferris typedef struct malloc_chunk  mchunk;
2192*2680e0c0SChristopher Ferris typedef struct malloc_chunk* mchunkptr;
2193*2680e0c0SChristopher Ferris typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2194*2680e0c0SChristopher Ferris typedef unsigned int bindex_t;         /* Described below */
2195*2680e0c0SChristopher Ferris typedef unsigned int binmap_t;         /* Described below */
2196*2680e0c0SChristopher Ferris typedef unsigned int flag_t;           /* The type of various bit flag sets */
2197*2680e0c0SChristopher Ferris 
2198*2680e0c0SChristopher Ferris /* ------------------- Chunks sizes and alignments ----------------------- */
2199*2680e0c0SChristopher Ferris 
2200*2680e0c0SChristopher Ferris #define MCHUNK_SIZE         (sizeof(mchunk))
2201*2680e0c0SChristopher Ferris 
2202*2680e0c0SChristopher Ferris #if FOOTERS
2203*2680e0c0SChristopher Ferris #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
2204*2680e0c0SChristopher Ferris #else /* FOOTERS */
2205*2680e0c0SChristopher Ferris #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
2206*2680e0c0SChristopher Ferris #endif /* FOOTERS */
2207*2680e0c0SChristopher Ferris 
2208*2680e0c0SChristopher Ferris /* MMapped chunks need a second word of overhead ... */
2209*2680e0c0SChristopher Ferris #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2210*2680e0c0SChristopher Ferris /* ... and additional padding for fake next-chunk at foot */
2211*2680e0c0SChristopher Ferris #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
2212*2680e0c0SChristopher Ferris 
2213*2680e0c0SChristopher Ferris /* The smallest size we can malloc is an aligned minimal chunk */
2214*2680e0c0SChristopher Ferris #define MIN_CHUNK_SIZE\
2215*2680e0c0SChristopher Ferris   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2216*2680e0c0SChristopher Ferris 
2217*2680e0c0SChristopher Ferris /* conversion from malloc headers to user pointers, and back */
2218*2680e0c0SChristopher Ferris #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
2219*2680e0c0SChristopher Ferris #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2220*2680e0c0SChristopher Ferris /* chunk associated with aligned address A */
2221*2680e0c0SChristopher Ferris #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
2222*2680e0c0SChristopher Ferris 
2223*2680e0c0SChristopher Ferris /* Bounds on request (not chunk) sizes. */
2224*2680e0c0SChristopher Ferris #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
2225*2680e0c0SChristopher Ferris #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2226*2680e0c0SChristopher Ferris 
2227*2680e0c0SChristopher Ferris /* pad request bytes into a usable size */
2228*2680e0c0SChristopher Ferris #define pad_request(req) \
2229*2680e0c0SChristopher Ferris    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2230*2680e0c0SChristopher Ferris 
2231*2680e0c0SChristopher Ferris /* pad request, checking for minimum (but not maximum) */
2232*2680e0c0SChristopher Ferris #define request2size(req) \
2233*2680e0c0SChristopher Ferris   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
2234*2680e0c0SChristopher Ferris 
2235*2680e0c0SChristopher Ferris 
2236*2680e0c0SChristopher Ferris /* ------------------ Operations on head and foot fields ----------------- */
2237*2680e0c0SChristopher Ferris 
2238*2680e0c0SChristopher Ferris /*
2239*2680e0c0SChristopher Ferris   The head field of a chunk is or'ed with PINUSE_BIT when previous
2240*2680e0c0SChristopher Ferris   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2241*2680e0c0SChristopher Ferris   use, unless mmapped, in which case both bits are cleared.
2242*2680e0c0SChristopher Ferris 
2243*2680e0c0SChristopher Ferris   FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2244*2680e0c0SChristopher Ferris */
2245*2680e0c0SChristopher Ferris 
2246*2680e0c0SChristopher Ferris #define PINUSE_BIT          (SIZE_T_ONE)
2247*2680e0c0SChristopher Ferris #define CINUSE_BIT          (SIZE_T_TWO)
2248*2680e0c0SChristopher Ferris #define FLAG4_BIT           (SIZE_T_FOUR)
2249*2680e0c0SChristopher Ferris #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
2250*2680e0c0SChristopher Ferris #define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2251*2680e0c0SChristopher Ferris 
2252*2680e0c0SChristopher Ferris /* Head value for fenceposts */
2253*2680e0c0SChristopher Ferris #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
2254*2680e0c0SChristopher Ferris 
2255*2680e0c0SChristopher Ferris /* extraction of fields from head words */
2256*2680e0c0SChristopher Ferris #define cinuse(p)           ((p)->head & CINUSE_BIT)
2257*2680e0c0SChristopher Ferris #define pinuse(p)           ((p)->head & PINUSE_BIT)
2258*2680e0c0SChristopher Ferris #define flag4inuse(p)       ((p)->head & FLAG4_BIT)
2259*2680e0c0SChristopher Ferris #define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
2260*2680e0c0SChristopher Ferris #define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)
2261*2680e0c0SChristopher Ferris 
2262*2680e0c0SChristopher Ferris #define chunksize(p)        ((p)->head & ~(FLAG_BITS))
2263*2680e0c0SChristopher Ferris 
2264*2680e0c0SChristopher Ferris #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
2265*2680e0c0SChristopher Ferris #define set_flag4(p)        ((p)->head |= FLAG4_BIT)
2266*2680e0c0SChristopher Ferris #define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)
2267*2680e0c0SChristopher Ferris 
2268*2680e0c0SChristopher Ferris /* Treat space at ptr +/- offset as a chunk */
2269*2680e0c0SChristopher Ferris #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2270*2680e0c0SChristopher Ferris #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2271*2680e0c0SChristopher Ferris 
2272*2680e0c0SChristopher Ferris /* Ptr to next or previous physical malloc_chunk. */
2273*2680e0c0SChristopher Ferris #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2274*2680e0c0SChristopher Ferris #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2275*2680e0c0SChristopher Ferris 
2276*2680e0c0SChristopher Ferris /* extract next chunk's pinuse bit */
2277*2680e0c0SChristopher Ferris #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
2278*2680e0c0SChristopher Ferris 
2279*2680e0c0SChristopher Ferris /* Get/set size at footer */
2280*2680e0c0SChristopher Ferris #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2281*2680e0c0SChristopher Ferris #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2282*2680e0c0SChristopher Ferris 
2283*2680e0c0SChristopher Ferris /* Set size, pinuse bit, and foot */
2284*2680e0c0SChristopher Ferris #define set_size_and_pinuse_of_free_chunk(p, s)\
2285*2680e0c0SChristopher Ferris   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2286*2680e0c0SChristopher Ferris 
2287*2680e0c0SChristopher Ferris /* Set size, pinuse bit, foot, and clear next pinuse */
2288*2680e0c0SChristopher Ferris #define set_free_with_pinuse(p, s, n)\
2289*2680e0c0SChristopher Ferris   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2290*2680e0c0SChristopher Ferris 
2291*2680e0c0SChristopher Ferris /* Get the internal overhead associated with chunk p */
2292*2680e0c0SChristopher Ferris #define overhead_for(p)\
2293*2680e0c0SChristopher Ferris  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2294*2680e0c0SChristopher Ferris 
2295*2680e0c0SChristopher Ferris /* Return true if malloced space is not necessarily cleared */
2296*2680e0c0SChristopher Ferris #if MMAP_CLEARS
2297*2680e0c0SChristopher Ferris #define calloc_must_clear(p) (!is_mmapped(p))
2298*2680e0c0SChristopher Ferris #else /* MMAP_CLEARS */
2299*2680e0c0SChristopher Ferris #define calloc_must_clear(p) (1)
2300*2680e0c0SChristopher Ferris #endif /* MMAP_CLEARS */
2301*2680e0c0SChristopher Ferris 
2302*2680e0c0SChristopher Ferris /* ---------------------- Overlaid data structures ----------------------- */
2303*2680e0c0SChristopher Ferris 
2304*2680e0c0SChristopher Ferris /*
2305*2680e0c0SChristopher Ferris   When chunks are not in use, they are treated as nodes of either
2306*2680e0c0SChristopher Ferris   lists or trees.
2307*2680e0c0SChristopher Ferris 
2308*2680e0c0SChristopher Ferris   "Small"  chunks are stored in circular doubly-linked lists, and look
2309*2680e0c0SChristopher Ferris   like this:
2310*2680e0c0SChristopher Ferris 
2311*2680e0c0SChristopher Ferris     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2312*2680e0c0SChristopher Ferris             |             Size of previous chunk                            |
2313*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2314*2680e0c0SChristopher Ferris     `head:' |             Size of chunk, in bytes                         |P|
2315*2680e0c0SChristopher Ferris       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2316*2680e0c0SChristopher Ferris             |             Forward pointer to next chunk in list             |
2317*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2318*2680e0c0SChristopher Ferris             |             Back pointer to previous chunk in list            |
2319*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2320*2680e0c0SChristopher Ferris             |             Unused space (may be 0 bytes long)                .
2321*2680e0c0SChristopher Ferris             .                                                               .
2322*2680e0c0SChristopher Ferris             .                                                               |
2323*2680e0c0SChristopher Ferris nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2324*2680e0c0SChristopher Ferris     `foot:' |             Size of chunk, in bytes                           |
2325*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2326*2680e0c0SChristopher Ferris 
2327*2680e0c0SChristopher Ferris   Larger chunks are kept in a form of bitwise digital trees (aka
2328*2680e0c0SChristopher Ferris   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
2329*2680e0c0SChristopher Ferris   free chunks greater than 256 bytes, their size doesn't impose any
2330*2680e0c0SChristopher Ferris   constraints on user chunk sizes.  Each node looks like:
2331*2680e0c0SChristopher Ferris 
2332*2680e0c0SChristopher Ferris     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333*2680e0c0SChristopher Ferris             |             Size of previous chunk                            |
2334*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335*2680e0c0SChristopher Ferris     `head:' |             Size of chunk, in bytes                         |P|
2336*2680e0c0SChristopher Ferris       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2337*2680e0c0SChristopher Ferris             |             Forward pointer to next chunk of same size        |
2338*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2339*2680e0c0SChristopher Ferris             |             Back pointer to previous chunk of same size       |
2340*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341*2680e0c0SChristopher Ferris             |             Pointer to left child (child[0])                  |
2342*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343*2680e0c0SChristopher Ferris             |             Pointer to right child (child[1])                 |
2344*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2345*2680e0c0SChristopher Ferris             |             Pointer to parent                                 |
2346*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2347*2680e0c0SChristopher Ferris             |             bin index of this chunk                           |
2348*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2349*2680e0c0SChristopher Ferris             |             Unused space                                      .
2350*2680e0c0SChristopher Ferris             .                                                               |
2351*2680e0c0SChristopher Ferris nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352*2680e0c0SChristopher Ferris     `foot:' |             Size of chunk, in bytes                           |
2353*2680e0c0SChristopher Ferris             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354*2680e0c0SChristopher Ferris 
2355*2680e0c0SChristopher Ferris   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
2356*2680e0c0SChristopher Ferris   of the same size are arranged in a circularly-linked list, with only
2357*2680e0c0SChristopher Ferris   the oldest chunk (the next to be used, in our FIFO ordering)
2358*2680e0c0SChristopher Ferris   actually in the tree.  (Tree members are distinguished by a non-null
2359*2680e0c0SChristopher Ferris   parent pointer.)  If a chunk with the same size an an existing node
2360*2680e0c0SChristopher Ferris   is inserted, it is linked off the existing node using pointers that
2361*2680e0c0SChristopher Ferris   work in the same way as fd/bk pointers of small chunks.
2362*2680e0c0SChristopher Ferris 
2363*2680e0c0SChristopher Ferris   Each tree contains a power of 2 sized range of chunk sizes (the
2364*2680e0c0SChristopher Ferris   smallest is 0x100 <= x < 0x180), which is is divided in half at each
2365*2680e0c0SChristopher Ferris   tree level, with the chunks in the smaller half of the range (0x100
2366*2680e0c0SChristopher Ferris   <= x < 0x140 for the top nose) in the left subtree and the larger
2367*2680e0c0SChristopher Ferris   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
2368*2680e0c0SChristopher Ferris   done by inspecting individual bits.
2369*2680e0c0SChristopher Ferris 
2370*2680e0c0SChristopher Ferris   Using these rules, each node's left subtree contains all smaller
2371*2680e0c0SChristopher Ferris   sizes than its right subtree.  However, the node at the root of each
2372*2680e0c0SChristopher Ferris   subtree has no particular ordering relationship to either.  (The
2373*2680e0c0SChristopher Ferris   dividing line between the subtree sizes is based on trie relation.)
2374*2680e0c0SChristopher Ferris   If we remove the last chunk of a given size from the interior of the
2375*2680e0c0SChristopher Ferris   tree, we need to replace it with a leaf node.  The tree ordering
2376*2680e0c0SChristopher Ferris   rules permit a node to be replaced by any leaf below it.
2377*2680e0c0SChristopher Ferris 
2378*2680e0c0SChristopher Ferris   The smallest chunk in a tree (a common operation in a best-fit
2379*2680e0c0SChristopher Ferris   allocator) can be found by walking a path to the leftmost leaf in
2380*2680e0c0SChristopher Ferris   the tree.  Unlike a usual binary tree, where we follow left child
2381*2680e0c0SChristopher Ferris   pointers until we reach a null, here we follow the right child
2382*2680e0c0SChristopher Ferris   pointer any time the left one is null, until we reach a leaf with
2383*2680e0c0SChristopher Ferris   both child pointers null. The smallest chunk in the tree will be
2384*2680e0c0SChristopher Ferris   somewhere along that path.
2385*2680e0c0SChristopher Ferris 
2386*2680e0c0SChristopher Ferris   The worst case number of steps to add, find, or remove a node is
2387*2680e0c0SChristopher Ferris   bounded by the number of bits differentiating chunks within
2388*2680e0c0SChristopher Ferris   bins. Under current bin calculations, this ranges from 6 up to 21
2389*2680e0c0SChristopher Ferris   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2390*2680e0c0SChristopher Ferris   is of course much better.
2391*2680e0c0SChristopher Ferris */
2392*2680e0c0SChristopher Ferris 
2393*2680e0c0SChristopher Ferris struct malloc_tree_chunk {
2394*2680e0c0SChristopher Ferris   /* The first four fields must be compatible with malloc_chunk */
2395*2680e0c0SChristopher Ferris   size_t                    prev_foot;
2396*2680e0c0SChristopher Ferris   size_t                    head;
2397*2680e0c0SChristopher Ferris   struct malloc_tree_chunk* fd;
2398*2680e0c0SChristopher Ferris   struct malloc_tree_chunk* bk;
2399*2680e0c0SChristopher Ferris 
2400*2680e0c0SChristopher Ferris   struct malloc_tree_chunk* child[2];
2401*2680e0c0SChristopher Ferris   struct malloc_tree_chunk* parent;
2402*2680e0c0SChristopher Ferris   bindex_t                  index;
2403*2680e0c0SChristopher Ferris };
2404*2680e0c0SChristopher Ferris 
2405*2680e0c0SChristopher Ferris typedef struct malloc_tree_chunk  tchunk;
2406*2680e0c0SChristopher Ferris typedef struct malloc_tree_chunk* tchunkptr;
2407*2680e0c0SChristopher Ferris typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2408*2680e0c0SChristopher Ferris 
2409*2680e0c0SChristopher Ferris /* A little helper macro for trees */
2410*2680e0c0SChristopher Ferris #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
2411*2680e0c0SChristopher Ferris 
2412*2680e0c0SChristopher Ferris /* ----------------------------- Segments -------------------------------- */
2413*2680e0c0SChristopher Ferris 
2414*2680e0c0SChristopher Ferris /*
2415*2680e0c0SChristopher Ferris   Each malloc space may include non-contiguous segments, held in a
2416*2680e0c0SChristopher Ferris   list headed by an embedded malloc_segment record representing the
2417*2680e0c0SChristopher Ferris   top-most space. Segments also include flags holding properties of
2418*2680e0c0SChristopher Ferris   the space. Large chunks that are directly allocated by mmap are not
2419*2680e0c0SChristopher Ferris   included in this list. They are instead independently created and
2420*2680e0c0SChristopher Ferris   destroyed without otherwise keeping track of them.
2421*2680e0c0SChristopher Ferris 
2422*2680e0c0SChristopher Ferris   Segment management mainly comes into play for spaces allocated by
2423*2680e0c0SChristopher Ferris   MMAP.  Any call to MMAP might or might not return memory that is
2424*2680e0c0SChristopher Ferris   adjacent to an existing segment.  MORECORE normally contiguously
2425*2680e0c0SChristopher Ferris   extends the current space, so this space is almost always adjacent,
2426*2680e0c0SChristopher Ferris   which is simpler and faster to deal with. (This is why MORECORE is
2427*2680e0c0SChristopher Ferris   used preferentially to MMAP when both are available -- see
2428*2680e0c0SChristopher Ferris   sys_alloc.)  When allocating using MMAP, we don't use any of the
2429*2680e0c0SChristopher Ferris   hinting mechanisms (inconsistently) supported in various
2430*2680e0c0SChristopher Ferris   implementations of unix mmap, or distinguish reserving from
2431*2680e0c0SChristopher Ferris   committing memory. Instead, we just ask for space, and exploit
2432*2680e0c0SChristopher Ferris   contiguity when we get it.  It is probably possible to do
2433*2680e0c0SChristopher Ferris   better than this on some systems, but no general scheme seems
2434*2680e0c0SChristopher Ferris   to be significantly better.
2435*2680e0c0SChristopher Ferris 
2436*2680e0c0SChristopher Ferris   Management entails a simpler variant of the consolidation scheme
2437*2680e0c0SChristopher Ferris   used for chunks to reduce fragmentation -- new adjacent memory is
2438*2680e0c0SChristopher Ferris   normally prepended or appended to an existing segment. However,
2439*2680e0c0SChristopher Ferris   there are limitations compared to chunk consolidation that mostly
2440*2680e0c0SChristopher Ferris   reflect the fact that segment processing is relatively infrequent
2441*2680e0c0SChristopher Ferris   (occurring only when getting memory from system) and that we
2442*2680e0c0SChristopher Ferris   don't expect to have huge numbers of segments:
2443*2680e0c0SChristopher Ferris 
2444*2680e0c0SChristopher Ferris   * Segments are not indexed, so traversal requires linear scans.  (It
2445*2680e0c0SChristopher Ferris     would be possible to index these, but is not worth the extra
2446*2680e0c0SChristopher Ferris     overhead and complexity for most programs on most platforms.)
2447*2680e0c0SChristopher Ferris   * New segments are only appended to old ones when holding top-most
2448*2680e0c0SChristopher Ferris     memory; if they cannot be prepended to others, they are held in
2449*2680e0c0SChristopher Ferris     different segments.
2450*2680e0c0SChristopher Ferris 
2451*2680e0c0SChristopher Ferris   Except for the top-most segment of an mstate, each segment record
2452*2680e0c0SChristopher Ferris   is kept at the tail of its segment. Segments are added by pushing
2453*2680e0c0SChristopher Ferris   segment records onto the list headed by &mstate.seg for the
2454*2680e0c0SChristopher Ferris   containing mstate.
2455*2680e0c0SChristopher Ferris 
2456*2680e0c0SChristopher Ferris   Segment flags control allocation/merge/deallocation policies:
2457*2680e0c0SChristopher Ferris   * If EXTERN_BIT set, then we did not allocate this segment,
2458*2680e0c0SChristopher Ferris     and so should not try to deallocate or merge with others.
2459*2680e0c0SChristopher Ferris     (This currently holds only for the initial segment passed
2460*2680e0c0SChristopher Ferris     into create_mspace_with_base.)
2461*2680e0c0SChristopher Ferris   * If USE_MMAP_BIT set, the segment may be merged with
2462*2680e0c0SChristopher Ferris     other surrounding mmapped segments and trimmed/de-allocated
2463*2680e0c0SChristopher Ferris     using munmap.
2464*2680e0c0SChristopher Ferris   * If neither bit is set, then the segment was obtained using
2465*2680e0c0SChristopher Ferris     MORECORE so can be merged with surrounding MORECORE'd segments
2466*2680e0c0SChristopher Ferris     and deallocated/trimmed using MORECORE with negative arguments.
2467*2680e0c0SChristopher Ferris */
2468*2680e0c0SChristopher Ferris 
2469*2680e0c0SChristopher Ferris struct malloc_segment {
2470*2680e0c0SChristopher Ferris   char*        base;             /* base address */
2471*2680e0c0SChristopher Ferris   size_t       size;             /* allocated size */
2472*2680e0c0SChristopher Ferris   struct malloc_segment* next;   /* ptr to next segment */
2473*2680e0c0SChristopher Ferris   flag_t       sflags;           /* mmap and extern flag */
2474*2680e0c0SChristopher Ferris };
2475*2680e0c0SChristopher Ferris 
2476*2680e0c0SChristopher Ferris #define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
2477*2680e0c0SChristopher Ferris #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2478*2680e0c0SChristopher Ferris 
2479*2680e0c0SChristopher Ferris typedef struct malloc_segment  msegment;
2480*2680e0c0SChristopher Ferris typedef struct malloc_segment* msegmentptr;
2481*2680e0c0SChristopher Ferris 
2482*2680e0c0SChristopher Ferris /* ---------------------------- malloc_state ----------------------------- */
2483*2680e0c0SChristopher Ferris 
2484*2680e0c0SChristopher Ferris /*
2485*2680e0c0SChristopher Ferris    A malloc_state holds all of the bookkeeping for a space.
2486*2680e0c0SChristopher Ferris    The main fields are:
2487*2680e0c0SChristopher Ferris 
2488*2680e0c0SChristopher Ferris   Top
2489*2680e0c0SChristopher Ferris     The topmost chunk of the currently active segment. Its size is
2490*2680e0c0SChristopher Ferris     cached in topsize.  The actual size of topmost space is
2491*2680e0c0SChristopher Ferris     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2492*2680e0c0SChristopher Ferris     fenceposts and segment records if necessary when getting more
2493*2680e0c0SChristopher Ferris     space from the system.  The size at which to autotrim top is
2494*2680e0c0SChristopher Ferris     cached from mparams in trim_check, except that it is disabled if
2495*2680e0c0SChristopher Ferris     an autotrim fails.
2496*2680e0c0SChristopher Ferris 
2497*2680e0c0SChristopher Ferris   Designated victim (dv)
2498*2680e0c0SChristopher Ferris     This is the preferred chunk for servicing small requests that
2499*2680e0c0SChristopher Ferris     don't have exact fits.  It is normally the chunk split off most
2500*2680e0c0SChristopher Ferris     recently to service another small request.  Its size is cached in
2501*2680e0c0SChristopher Ferris     dvsize. The link fields of this chunk are not maintained since it
2502*2680e0c0SChristopher Ferris     is not kept in a bin.
2503*2680e0c0SChristopher Ferris 
2504*2680e0c0SChristopher Ferris   SmallBins
2505*2680e0c0SChristopher Ferris     An array of bin headers for free chunks.  These bins hold chunks
2506*2680e0c0SChristopher Ferris     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2507*2680e0c0SChristopher Ferris     chunks of all the same size, spaced 8 bytes apart.  To simplify
2508*2680e0c0SChristopher Ferris     use in double-linked lists, each bin header acts as a malloc_chunk
2509*2680e0c0SChristopher Ferris     pointing to the real first node, if it exists (else pointing to
2510*2680e0c0SChristopher Ferris     itself).  This avoids special-casing for headers.  But to avoid
2511*2680e0c0SChristopher Ferris     waste, we allocate only the fd/bk pointers of bins, and then use
2512*2680e0c0SChristopher Ferris     repositioning tricks to treat these as the fields of a chunk.
2513*2680e0c0SChristopher Ferris 
2514*2680e0c0SChristopher Ferris   TreeBins
2515*2680e0c0SChristopher Ferris     Treebins are pointers to the roots of trees holding a range of
2516*2680e0c0SChristopher Ferris     sizes. There are 2 equally spaced treebins for each power of two
2517*2680e0c0SChristopher Ferris     from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
2518*2680e0c0SChristopher Ferris     larger.
2519*2680e0c0SChristopher Ferris 
2520*2680e0c0SChristopher Ferris   Bin maps
2521*2680e0c0SChristopher Ferris     There is one bit map for small bins ("smallmap") and one for
2522*2680e0c0SChristopher Ferris     treebins ("treemap).  Each bin sets its bit when non-empty, and
2523*2680e0c0SChristopher Ferris     clears the bit when empty.  Bit operations are then used to avoid
2524*2680e0c0SChristopher Ferris     bin-by-bin searching -- nearly all "search" is done without ever
2525*2680e0c0SChristopher Ferris     looking at bins that won't be selected.  The bit maps
2526*2680e0c0SChristopher Ferris     conservatively use 32 bits per map word, even if on 64bit system.
2527*2680e0c0SChristopher Ferris     For a good description of some of the bit-based techniques used
2528*2680e0c0SChristopher Ferris     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2529*2680e0c0SChristopher Ferris     supplement at http://hackersdelight.org/). Many of these are
2530*2680e0c0SChristopher Ferris     intended to reduce the branchiness of paths through malloc etc, as
2531*2680e0c0SChristopher Ferris     well as to reduce the number of memory locations read or written.
2532*2680e0c0SChristopher Ferris 
2533*2680e0c0SChristopher Ferris   Segments
2534*2680e0c0SChristopher Ferris     A list of segments headed by an embedded malloc_segment record
2535*2680e0c0SChristopher Ferris     representing the initial space.
2536*2680e0c0SChristopher Ferris 
2537*2680e0c0SChristopher Ferris   Address check support
2538*2680e0c0SChristopher Ferris     The least_addr field is the least address ever obtained from
2539*2680e0c0SChristopher Ferris     MORECORE or MMAP. Attempted frees and reallocs of any address less
2540*2680e0c0SChristopher Ferris     than this are trapped (unless INSECURE is defined).
2541*2680e0c0SChristopher Ferris 
2542*2680e0c0SChristopher Ferris   Magic tag
2543*2680e0c0SChristopher Ferris     A cross-check field that should always hold same value as mparams.magic.
2544*2680e0c0SChristopher Ferris 
2545*2680e0c0SChristopher Ferris   Max allowed footprint
2546*2680e0c0SChristopher Ferris     The maximum allowed bytes to allocate from system (zero means no limit)
2547*2680e0c0SChristopher Ferris 
2548*2680e0c0SChristopher Ferris   Flags
2549*2680e0c0SChristopher Ferris     Bits recording whether to use MMAP, locks, or contiguous MORECORE
2550*2680e0c0SChristopher Ferris 
2551*2680e0c0SChristopher Ferris   Statistics
2552*2680e0c0SChristopher Ferris     Each space keeps track of current and maximum system memory
2553*2680e0c0SChristopher Ferris     obtained via MORECORE or MMAP.
2554*2680e0c0SChristopher Ferris 
2555*2680e0c0SChristopher Ferris   Trim support
2556*2680e0c0SChristopher Ferris     Fields holding the amount of unused topmost memory that should trigger
2557*2680e0c0SChristopher Ferris     trimming, and a counter to force periodic scanning to release unused
2558*2680e0c0SChristopher Ferris     non-topmost segments.
2559*2680e0c0SChristopher Ferris 
2560*2680e0c0SChristopher Ferris   Locking
2561*2680e0c0SChristopher Ferris     If USE_LOCKS is defined, the "mutex" lock is acquired and released
2562*2680e0c0SChristopher Ferris     around every public call using this mspace.
2563*2680e0c0SChristopher Ferris 
2564*2680e0c0SChristopher Ferris   Extension support
2565*2680e0c0SChristopher Ferris     A void* pointer and a size_t field that can be used to help implement
2566*2680e0c0SChristopher Ferris     extensions to this malloc.
2567*2680e0c0SChristopher Ferris */
2568*2680e0c0SChristopher Ferris 
2569*2680e0c0SChristopher Ferris /* Bin types, widths and sizes */
2570*2680e0c0SChristopher Ferris #define NSMALLBINS        (32U)
2571*2680e0c0SChristopher Ferris #define NTREEBINS         (32U)
2572*2680e0c0SChristopher Ferris #define SMALLBIN_SHIFT    (3U)
2573*2680e0c0SChristopher Ferris #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2574*2680e0c0SChristopher Ferris #define TREEBIN_SHIFT     (8U)
2575*2680e0c0SChristopher Ferris #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2576*2680e0c0SChristopher Ferris #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2577*2680e0c0SChristopher Ferris #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2578*2680e0c0SChristopher Ferris 
2579*2680e0c0SChristopher Ferris struct malloc_state {
2580*2680e0c0SChristopher Ferris   binmap_t   smallmap;
2581*2680e0c0SChristopher Ferris   binmap_t   treemap;
2582*2680e0c0SChristopher Ferris   size_t     dvsize;
2583*2680e0c0SChristopher Ferris   size_t     topsize;
2584*2680e0c0SChristopher Ferris   char*      least_addr;
2585*2680e0c0SChristopher Ferris   mchunkptr  dv;
2586*2680e0c0SChristopher Ferris   mchunkptr  top;
2587*2680e0c0SChristopher Ferris   size_t     trim_check;
2588*2680e0c0SChristopher Ferris   size_t     release_checks;
2589*2680e0c0SChristopher Ferris   size_t     magic;
2590*2680e0c0SChristopher Ferris   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2591*2680e0c0SChristopher Ferris   tbinptr    treebins[NTREEBINS];
2592*2680e0c0SChristopher Ferris   size_t     footprint;
2593*2680e0c0SChristopher Ferris   size_t     max_footprint;
2594*2680e0c0SChristopher Ferris   size_t     footprint_limit; /* zero means no limit */
2595*2680e0c0SChristopher Ferris   flag_t     mflags;
2596*2680e0c0SChristopher Ferris #if USE_LOCKS
2597*2680e0c0SChristopher Ferris   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2598*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
2599*2680e0c0SChristopher Ferris   msegment   seg;
2600*2680e0c0SChristopher Ferris   void*      extp;      /* Unused but available for extensions */
2601*2680e0c0SChristopher Ferris   size_t     exts;
2602*2680e0c0SChristopher Ferris };
2603*2680e0c0SChristopher Ferris 
2604*2680e0c0SChristopher Ferris typedef struct malloc_state*    mstate;
2605*2680e0c0SChristopher Ferris 
2606*2680e0c0SChristopher Ferris /* ------------- Global malloc_state and malloc_params ------------------- */
2607*2680e0c0SChristopher Ferris 
2608*2680e0c0SChristopher Ferris /*
2609*2680e0c0SChristopher Ferris   malloc_params holds global properties, including those that can be
2610*2680e0c0SChristopher Ferris   dynamically set using mallopt. There is a single instance, mparams,
2611*2680e0c0SChristopher Ferris   initialized in init_mparams. Note that the non-zeroness of "magic"
2612*2680e0c0SChristopher Ferris   also serves as an initialization flag.
2613*2680e0c0SChristopher Ferris */
2614*2680e0c0SChristopher Ferris 
2615*2680e0c0SChristopher Ferris struct malloc_params {
2616*2680e0c0SChristopher Ferris   size_t magic;
2617*2680e0c0SChristopher Ferris   size_t page_size;
2618*2680e0c0SChristopher Ferris   size_t granularity;
2619*2680e0c0SChristopher Ferris   size_t mmap_threshold;
2620*2680e0c0SChristopher Ferris   size_t trim_threshold;
2621*2680e0c0SChristopher Ferris   flag_t default_mflags;
2622*2680e0c0SChristopher Ferris };
2623*2680e0c0SChristopher Ferris 
2624*2680e0c0SChristopher Ferris static struct malloc_params mparams;
2625*2680e0c0SChristopher Ferris 
2626*2680e0c0SChristopher Ferris /* Ensure mparams initialized */
2627*2680e0c0SChristopher Ferris #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2628*2680e0c0SChristopher Ferris 
2629*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
2630*2680e0c0SChristopher Ferris 
2631*2680e0c0SChristopher Ferris /* The global malloc_state used for all non-"mspace" calls */
2632*2680e0c0SChristopher Ferris static struct malloc_state _gm_;
2633*2680e0c0SChristopher Ferris #define gm                 (&_gm_)
2634*2680e0c0SChristopher Ferris #define is_global(M)       ((M) == &_gm_)
2635*2680e0c0SChristopher Ferris 
2636*2680e0c0SChristopher Ferris #endif /* !ONLY_MSPACES */
2637*2680e0c0SChristopher Ferris 
2638*2680e0c0SChristopher Ferris #define is_initialized(M)  ((M)->top != 0)
2639*2680e0c0SChristopher Ferris 
2640*2680e0c0SChristopher Ferris /* -------------------------- system alloc setup ------------------------- */
2641*2680e0c0SChristopher Ferris 
2642*2680e0c0SChristopher Ferris /* Operations on mflags */
2643*2680e0c0SChristopher Ferris 
2644*2680e0c0SChristopher Ferris #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2645*2680e0c0SChristopher Ferris #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2646*2680e0c0SChristopher Ferris #if USE_LOCKS
2647*2680e0c0SChristopher Ferris #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2648*2680e0c0SChristopher Ferris #else
2649*2680e0c0SChristopher Ferris #define disable_lock(M)
2650*2680e0c0SChristopher Ferris #endif
2651*2680e0c0SChristopher Ferris 
2652*2680e0c0SChristopher Ferris #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2653*2680e0c0SChristopher Ferris #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2654*2680e0c0SChristopher Ferris #if HAVE_MMAP
2655*2680e0c0SChristopher Ferris #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2656*2680e0c0SChristopher Ferris #else
2657*2680e0c0SChristopher Ferris #define disable_mmap(M)
2658*2680e0c0SChristopher Ferris #endif
2659*2680e0c0SChristopher Ferris 
2660*2680e0c0SChristopher Ferris #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2661*2680e0c0SChristopher Ferris #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2662*2680e0c0SChristopher Ferris 
2663*2680e0c0SChristopher Ferris #define set_lock(M,L)\
2664*2680e0c0SChristopher Ferris  ((M)->mflags = (L)?\
2665*2680e0c0SChristopher Ferris   ((M)->mflags | USE_LOCK_BIT) :\
2666*2680e0c0SChristopher Ferris   ((M)->mflags & ~USE_LOCK_BIT))
2667*2680e0c0SChristopher Ferris 
2668*2680e0c0SChristopher Ferris /* page-align a size */
2669*2680e0c0SChristopher Ferris #define page_align(S)\
2670*2680e0c0SChristopher Ferris  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2671*2680e0c0SChristopher Ferris 
2672*2680e0c0SChristopher Ferris /* granularity-align a size */
2673*2680e0c0SChristopher Ferris #define granularity_align(S)\
2674*2680e0c0SChristopher Ferris   (((S) + (mparams.granularity - SIZE_T_ONE))\
2675*2680e0c0SChristopher Ferris    & ~(mparams.granularity - SIZE_T_ONE))
2676*2680e0c0SChristopher Ferris 
2677*2680e0c0SChristopher Ferris 
2678*2680e0c0SChristopher Ferris /* For mmap, use granularity alignment on windows, else page-align */
2679*2680e0c0SChristopher Ferris #ifdef WIN32
2680*2680e0c0SChristopher Ferris #define mmap_align(S) granularity_align(S)
2681*2680e0c0SChristopher Ferris #else
2682*2680e0c0SChristopher Ferris #define mmap_align(S) page_align(S)
2683*2680e0c0SChristopher Ferris #endif
2684*2680e0c0SChristopher Ferris 
2685*2680e0c0SChristopher Ferris /* For sys_alloc, enough padding to ensure can malloc request on success */
2686*2680e0c0SChristopher Ferris #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2687*2680e0c0SChristopher Ferris 
2688*2680e0c0SChristopher Ferris #define is_page_aligned(S)\
2689*2680e0c0SChristopher Ferris    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2690*2680e0c0SChristopher Ferris #define is_granularity_aligned(S)\
2691*2680e0c0SChristopher Ferris    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2692*2680e0c0SChristopher Ferris 
2693*2680e0c0SChristopher Ferris /*  True if segment S holds address A */
2694*2680e0c0SChristopher Ferris #define segment_holds(S, A)\
2695*2680e0c0SChristopher Ferris   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2696*2680e0c0SChristopher Ferris 
2697*2680e0c0SChristopher Ferris /* Return segment holding given address */
segment_holding(mstate m,char * addr)2698*2680e0c0SChristopher Ferris static msegmentptr segment_holding(mstate m, char* addr) {
2699*2680e0c0SChristopher Ferris   msegmentptr sp = &m->seg;
2700*2680e0c0SChristopher Ferris   for (;;) {
2701*2680e0c0SChristopher Ferris     if (addr >= sp->base && addr < sp->base + sp->size)
2702*2680e0c0SChristopher Ferris       return sp;
2703*2680e0c0SChristopher Ferris     if ((sp = sp->next) == 0)
2704*2680e0c0SChristopher Ferris       return 0;
2705*2680e0c0SChristopher Ferris   }
2706*2680e0c0SChristopher Ferris }
2707*2680e0c0SChristopher Ferris 
2708*2680e0c0SChristopher Ferris /* Return true if segment contains a segment link */
has_segment_link(mstate m,msegmentptr ss)2709*2680e0c0SChristopher Ferris static int has_segment_link(mstate m, msegmentptr ss) {
2710*2680e0c0SChristopher Ferris   msegmentptr sp = &m->seg;
2711*2680e0c0SChristopher Ferris   for (;;) {
2712*2680e0c0SChristopher Ferris     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2713*2680e0c0SChristopher Ferris       return 1;
2714*2680e0c0SChristopher Ferris     if ((sp = sp->next) == 0)
2715*2680e0c0SChristopher Ferris       return 0;
2716*2680e0c0SChristopher Ferris   }
2717*2680e0c0SChristopher Ferris }
2718*2680e0c0SChristopher Ferris 
2719*2680e0c0SChristopher Ferris #ifndef MORECORE_CANNOT_TRIM
2720*2680e0c0SChristopher Ferris #define should_trim(M,s)  ((s) > (M)->trim_check)
2721*2680e0c0SChristopher Ferris #else  /* MORECORE_CANNOT_TRIM */
2722*2680e0c0SChristopher Ferris #define should_trim(M,s)  (0)
2723*2680e0c0SChristopher Ferris #endif /* MORECORE_CANNOT_TRIM */
2724*2680e0c0SChristopher Ferris 
2725*2680e0c0SChristopher Ferris /*
2726*2680e0c0SChristopher Ferris   TOP_FOOT_SIZE is padding at the end of a segment, including space
2727*2680e0c0SChristopher Ferris   that may be needed to place segment records and fenceposts when new
2728*2680e0c0SChristopher Ferris   noncontiguous segments are added.
2729*2680e0c0SChristopher Ferris */
2730*2680e0c0SChristopher Ferris #define TOP_FOOT_SIZE\
2731*2680e0c0SChristopher Ferris   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2732*2680e0c0SChristopher Ferris 
2733*2680e0c0SChristopher Ferris 
2734*2680e0c0SChristopher Ferris /* -------------------------------  Hooks -------------------------------- */
2735*2680e0c0SChristopher Ferris 
2736*2680e0c0SChristopher Ferris /*
2737*2680e0c0SChristopher Ferris   PREACTION should be defined to return 0 on success, and nonzero on
2738*2680e0c0SChristopher Ferris   failure. If you are not using locking, you can redefine these to do
2739*2680e0c0SChristopher Ferris   anything you like.
2740*2680e0c0SChristopher Ferris */
2741*2680e0c0SChristopher Ferris 
2742*2680e0c0SChristopher Ferris #if USE_LOCKS
2743*2680e0c0SChristopher Ferris #define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2744*2680e0c0SChristopher Ferris #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2745*2680e0c0SChristopher Ferris #else /* USE_LOCKS */
2746*2680e0c0SChristopher Ferris 
2747*2680e0c0SChristopher Ferris #ifndef PREACTION
2748*2680e0c0SChristopher Ferris #define PREACTION(M) (0)
2749*2680e0c0SChristopher Ferris #endif  /* PREACTION */
2750*2680e0c0SChristopher Ferris 
2751*2680e0c0SChristopher Ferris #ifndef POSTACTION
2752*2680e0c0SChristopher Ferris #define POSTACTION(M)
2753*2680e0c0SChristopher Ferris #endif  /* POSTACTION */
2754*2680e0c0SChristopher Ferris 
2755*2680e0c0SChristopher Ferris #endif /* USE_LOCKS */
2756*2680e0c0SChristopher Ferris 
2757*2680e0c0SChristopher Ferris /*
2758*2680e0c0SChristopher Ferris   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2759*2680e0c0SChristopher Ferris   USAGE_ERROR_ACTION is triggered on detected bad frees and
2760*2680e0c0SChristopher Ferris   reallocs. The argument p is an address that might have triggered the
2761*2680e0c0SChristopher Ferris   fault. It is ignored by the two predefined actions, but might be
2762*2680e0c0SChristopher Ferris   useful in custom actions that try to help diagnose errors.
2763*2680e0c0SChristopher Ferris */
2764*2680e0c0SChristopher Ferris 
2765*2680e0c0SChristopher Ferris #if PROCEED_ON_ERROR
2766*2680e0c0SChristopher Ferris 
2767*2680e0c0SChristopher Ferris /* A count of the number of corruption errors causing resets */
2768*2680e0c0SChristopher Ferris int malloc_corruption_error_count;
2769*2680e0c0SChristopher Ferris 
2770*2680e0c0SChristopher Ferris /* default corruption action */
2771*2680e0c0SChristopher Ferris static void reset_on_error(mstate m);
2772*2680e0c0SChristopher Ferris 
2773*2680e0c0SChristopher Ferris #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2774*2680e0c0SChristopher Ferris #define USAGE_ERROR_ACTION(m, p)
2775*2680e0c0SChristopher Ferris 
2776*2680e0c0SChristopher Ferris #else /* PROCEED_ON_ERROR */
2777*2680e0c0SChristopher Ferris 
2778*2680e0c0SChristopher Ferris #ifndef CORRUPTION_ERROR_ACTION
2779*2680e0c0SChristopher Ferris #define CORRUPTION_ERROR_ACTION(m) ABORT
2780*2680e0c0SChristopher Ferris #endif /* CORRUPTION_ERROR_ACTION */
2781*2680e0c0SChristopher Ferris 
2782*2680e0c0SChristopher Ferris #ifndef USAGE_ERROR_ACTION
2783*2680e0c0SChristopher Ferris #define USAGE_ERROR_ACTION(m,p) ABORT
2784*2680e0c0SChristopher Ferris #endif /* USAGE_ERROR_ACTION */
2785*2680e0c0SChristopher Ferris 
2786*2680e0c0SChristopher Ferris #endif /* PROCEED_ON_ERROR */
2787*2680e0c0SChristopher Ferris 
2788*2680e0c0SChristopher Ferris 
2789*2680e0c0SChristopher Ferris /* -------------------------- Debugging setup ---------------------------- */
2790*2680e0c0SChristopher Ferris 
2791*2680e0c0SChristopher Ferris #if ! DEBUG
2792*2680e0c0SChristopher Ferris 
2793*2680e0c0SChristopher Ferris #define check_free_chunk(M,P)
2794*2680e0c0SChristopher Ferris #define check_inuse_chunk(M,P)
2795*2680e0c0SChristopher Ferris #define check_malloced_chunk(M,P,N)
2796*2680e0c0SChristopher Ferris #define check_mmapped_chunk(M,P)
2797*2680e0c0SChristopher Ferris #define check_malloc_state(M)
2798*2680e0c0SChristopher Ferris #define check_top_chunk(M,P)
2799*2680e0c0SChristopher Ferris 
2800*2680e0c0SChristopher Ferris #else /* DEBUG */
2801*2680e0c0SChristopher Ferris #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2802*2680e0c0SChristopher Ferris #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2803*2680e0c0SChristopher Ferris #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2804*2680e0c0SChristopher Ferris #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2805*2680e0c0SChristopher Ferris #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2806*2680e0c0SChristopher Ferris #define check_malloc_state(M)       do_check_malloc_state(M)
2807*2680e0c0SChristopher Ferris 
2808*2680e0c0SChristopher Ferris static void   do_check_any_chunk(mstate m, mchunkptr p);
2809*2680e0c0SChristopher Ferris static void   do_check_top_chunk(mstate m, mchunkptr p);
2810*2680e0c0SChristopher Ferris static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2811*2680e0c0SChristopher Ferris static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2812*2680e0c0SChristopher Ferris static void   do_check_free_chunk(mstate m, mchunkptr p);
2813*2680e0c0SChristopher Ferris static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2814*2680e0c0SChristopher Ferris static void   do_check_tree(mstate m, tchunkptr t);
2815*2680e0c0SChristopher Ferris static void   do_check_treebin(mstate m, bindex_t i);
2816*2680e0c0SChristopher Ferris static void   do_check_smallbin(mstate m, bindex_t i);
2817*2680e0c0SChristopher Ferris static void   do_check_malloc_state(mstate m);
2818*2680e0c0SChristopher Ferris static int    bin_find(mstate m, mchunkptr x);
2819*2680e0c0SChristopher Ferris static size_t traverse_and_check(mstate m);
2820*2680e0c0SChristopher Ferris #endif /* DEBUG */
2821*2680e0c0SChristopher Ferris 
2822*2680e0c0SChristopher Ferris /* ---------------------------- Indexing Bins ---------------------------- */
2823*2680e0c0SChristopher Ferris 
2824*2680e0c0SChristopher Ferris #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2825*2680e0c0SChristopher Ferris #define small_index(s)      (bindex_t)((s)  >> SMALLBIN_SHIFT)
2826*2680e0c0SChristopher Ferris #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2827*2680e0c0SChristopher Ferris #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2828*2680e0c0SChristopher Ferris 
2829*2680e0c0SChristopher Ferris /* addressing by index. See above about smallbin repositioning */
2830*2680e0c0SChristopher Ferris /* BEGIN android-changed: strict aliasing change: char* cast to void* */
2831*2680e0c0SChristopher Ferris #define smallbin_at(M, i)   ((sbinptr)((void*)&((M)->smallbins[(i)<<1])))
2832*2680e0c0SChristopher Ferris /* END android-changed */
2833*2680e0c0SChristopher Ferris #define treebin_at(M,i)     (&((M)->treebins[i]))
2834*2680e0c0SChristopher Ferris 
2835*2680e0c0SChristopher Ferris /* assign tree index for size S to variable I. Use x86 asm if possible  */
2836*2680e0c0SChristopher Ferris #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2837*2680e0c0SChristopher Ferris #define compute_tree_index(S, I)\
2838*2680e0c0SChristopher Ferris {\
2839*2680e0c0SChristopher Ferris   unsigned int X = S >> TREEBIN_SHIFT;\
2840*2680e0c0SChristopher Ferris   if (X == 0)\
2841*2680e0c0SChristopher Ferris     I = 0;\
2842*2680e0c0SChristopher Ferris   else if (X > 0xFFFF)\
2843*2680e0c0SChristopher Ferris     I = NTREEBINS-1;\
2844*2680e0c0SChristopher Ferris   else {\
2845*2680e0c0SChristopher Ferris     unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2846*2680e0c0SChristopher Ferris     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2847*2680e0c0SChristopher Ferris   }\
2848*2680e0c0SChristopher Ferris }
2849*2680e0c0SChristopher Ferris 
2850*2680e0c0SChristopher Ferris #elif defined (__INTEL_COMPILER)
2851*2680e0c0SChristopher Ferris #define compute_tree_index(S, I)\
2852*2680e0c0SChristopher Ferris {\
2853*2680e0c0SChristopher Ferris   size_t X = S >> TREEBIN_SHIFT;\
2854*2680e0c0SChristopher Ferris   if (X == 0)\
2855*2680e0c0SChristopher Ferris     I = 0;\
2856*2680e0c0SChristopher Ferris   else if (X > 0xFFFF)\
2857*2680e0c0SChristopher Ferris     I = NTREEBINS-1;\
2858*2680e0c0SChristopher Ferris   else {\
2859*2680e0c0SChristopher Ferris     unsigned int K = _bit_scan_reverse (X); \
2860*2680e0c0SChristopher Ferris     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2861*2680e0c0SChristopher Ferris   }\
2862*2680e0c0SChristopher Ferris }
2863*2680e0c0SChristopher Ferris 
2864*2680e0c0SChristopher Ferris #elif defined(_MSC_VER) && _MSC_VER>=1300
2865*2680e0c0SChristopher Ferris #define compute_tree_index(S, I)\
2866*2680e0c0SChristopher Ferris {\
2867*2680e0c0SChristopher Ferris   size_t X = S >> TREEBIN_SHIFT;\
2868*2680e0c0SChristopher Ferris   if (X == 0)\
2869*2680e0c0SChristopher Ferris     I = 0;\
2870*2680e0c0SChristopher Ferris   else if (X > 0xFFFF)\
2871*2680e0c0SChristopher Ferris     I = NTREEBINS-1;\
2872*2680e0c0SChristopher Ferris   else {\
2873*2680e0c0SChristopher Ferris     unsigned int K;\
2874*2680e0c0SChristopher Ferris     _BitScanReverse((DWORD *) &K, (DWORD) X);\
2875*2680e0c0SChristopher Ferris     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2876*2680e0c0SChristopher Ferris   }\
2877*2680e0c0SChristopher Ferris }
2878*2680e0c0SChristopher Ferris 
2879*2680e0c0SChristopher Ferris #else /* GNUC */
2880*2680e0c0SChristopher Ferris #define compute_tree_index(S, I)\
2881*2680e0c0SChristopher Ferris {\
2882*2680e0c0SChristopher Ferris   size_t X = S >> TREEBIN_SHIFT;\
2883*2680e0c0SChristopher Ferris   if (X == 0)\
2884*2680e0c0SChristopher Ferris     I = 0;\
2885*2680e0c0SChristopher Ferris   else if (X > 0xFFFF)\
2886*2680e0c0SChristopher Ferris     I = NTREEBINS-1;\
2887*2680e0c0SChristopher Ferris   else {\
2888*2680e0c0SChristopher Ferris     unsigned int Y = (unsigned int)X;\
2889*2680e0c0SChristopher Ferris     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2890*2680e0c0SChristopher Ferris     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2891*2680e0c0SChristopher Ferris     N += K;\
2892*2680e0c0SChristopher Ferris     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2893*2680e0c0SChristopher Ferris     K = 14 - N + ((Y <<= K) >> 15);\
2894*2680e0c0SChristopher Ferris     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2895*2680e0c0SChristopher Ferris   }\
2896*2680e0c0SChristopher Ferris }
2897*2680e0c0SChristopher Ferris #endif /* GNUC */
2898*2680e0c0SChristopher Ferris 
2899*2680e0c0SChristopher Ferris /* Bit representing maximum resolved size in a treebin at i */
2900*2680e0c0SChristopher Ferris #define bit_for_tree_index(i) \
2901*2680e0c0SChristopher Ferris    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2902*2680e0c0SChristopher Ferris 
2903*2680e0c0SChristopher Ferris /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2904*2680e0c0SChristopher Ferris #define leftshift_for_tree_index(i) \
2905*2680e0c0SChristopher Ferris    ((i == NTREEBINS-1)? 0 : \
2906*2680e0c0SChristopher Ferris     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2907*2680e0c0SChristopher Ferris 
2908*2680e0c0SChristopher Ferris /* The size of the smallest chunk held in bin with index i */
2909*2680e0c0SChristopher Ferris #define minsize_for_tree_index(i) \
2910*2680e0c0SChristopher Ferris    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2911*2680e0c0SChristopher Ferris    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2912*2680e0c0SChristopher Ferris 
2913*2680e0c0SChristopher Ferris 
2914*2680e0c0SChristopher Ferris /* ------------------------ Operations on bin maps ----------------------- */
2915*2680e0c0SChristopher Ferris 
2916*2680e0c0SChristopher Ferris /* bit corresponding to given index */
2917*2680e0c0SChristopher Ferris #define idx2bit(i)              ((binmap_t)(1) << (i))
2918*2680e0c0SChristopher Ferris 
2919*2680e0c0SChristopher Ferris /* Mark/Clear bits with given index */
2920*2680e0c0SChristopher Ferris #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2921*2680e0c0SChristopher Ferris #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2922*2680e0c0SChristopher Ferris #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2923*2680e0c0SChristopher Ferris 
2924*2680e0c0SChristopher Ferris #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2925*2680e0c0SChristopher Ferris #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2926*2680e0c0SChristopher Ferris #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2927*2680e0c0SChristopher Ferris 
2928*2680e0c0SChristopher Ferris /* isolate the least set bit of a bitmap */
2929*2680e0c0SChristopher Ferris #define least_bit(x)         ((x) & -(x))
2930*2680e0c0SChristopher Ferris 
2931*2680e0c0SChristopher Ferris /* mask with all bits to left of least bit of x on */
2932*2680e0c0SChristopher Ferris #define left_bits(x)         ((x<<1) | -(x<<1))
2933*2680e0c0SChristopher Ferris 
2934*2680e0c0SChristopher Ferris /* mask with all bits to left of or equal to least bit of x on */
2935*2680e0c0SChristopher Ferris #define same_or_left_bits(x) ((x) | -(x))
2936*2680e0c0SChristopher Ferris 
2937*2680e0c0SChristopher Ferris /* index corresponding to given bit. Use x86 asm if possible */
2938*2680e0c0SChristopher Ferris 
2939*2680e0c0SChristopher Ferris #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2940*2680e0c0SChristopher Ferris #define compute_bit2idx(X, I)\
2941*2680e0c0SChristopher Ferris {\
2942*2680e0c0SChristopher Ferris   unsigned int J;\
2943*2680e0c0SChristopher Ferris   J = __builtin_ctz(X); \
2944*2680e0c0SChristopher Ferris   I = (bindex_t)J;\
2945*2680e0c0SChristopher Ferris }
2946*2680e0c0SChristopher Ferris 
2947*2680e0c0SChristopher Ferris #elif defined (__INTEL_COMPILER)
2948*2680e0c0SChristopher Ferris #define compute_bit2idx(X, I)\
2949*2680e0c0SChristopher Ferris {\
2950*2680e0c0SChristopher Ferris   unsigned int J;\
2951*2680e0c0SChristopher Ferris   J = _bit_scan_forward (X); \
2952*2680e0c0SChristopher Ferris   I = (bindex_t)J;\
2953*2680e0c0SChristopher Ferris }
2954*2680e0c0SChristopher Ferris 
2955*2680e0c0SChristopher Ferris #elif defined(_MSC_VER) && _MSC_VER>=1300
2956*2680e0c0SChristopher Ferris #define compute_bit2idx(X, I)\
2957*2680e0c0SChristopher Ferris {\
2958*2680e0c0SChristopher Ferris   unsigned int J;\
2959*2680e0c0SChristopher Ferris   _BitScanForward((DWORD *) &J, X);\
2960*2680e0c0SChristopher Ferris   I = (bindex_t)J;\
2961*2680e0c0SChristopher Ferris }
2962*2680e0c0SChristopher Ferris 
2963*2680e0c0SChristopher Ferris #elif USE_BUILTIN_FFS
2964*2680e0c0SChristopher Ferris #define compute_bit2idx(X, I) I = ffs(X)-1
2965*2680e0c0SChristopher Ferris 
2966*2680e0c0SChristopher Ferris #else
2967*2680e0c0SChristopher Ferris #define compute_bit2idx(X, I)\
2968*2680e0c0SChristopher Ferris {\
2969*2680e0c0SChristopher Ferris   unsigned int Y = X - 1;\
2970*2680e0c0SChristopher Ferris   unsigned int K = Y >> (16-4) & 16;\
2971*2680e0c0SChristopher Ferris   unsigned int N = K;        Y >>= K;\
2972*2680e0c0SChristopher Ferris   N += K = Y >> (8-3) &  8;  Y >>= K;\
2973*2680e0c0SChristopher Ferris   N += K = Y >> (4-2) &  4;  Y >>= K;\
2974*2680e0c0SChristopher Ferris   N += K = Y >> (2-1) &  2;  Y >>= K;\
2975*2680e0c0SChristopher Ferris   N += K = Y >> (1-0) &  1;  Y >>= K;\
2976*2680e0c0SChristopher Ferris   I = (bindex_t)(N + Y);\
2977*2680e0c0SChristopher Ferris }
2978*2680e0c0SChristopher Ferris #endif /* GNUC */
2979*2680e0c0SChristopher Ferris 
2980*2680e0c0SChristopher Ferris 
2981*2680e0c0SChristopher Ferris /* ----------------------- Runtime Check Support ------------------------- */
2982*2680e0c0SChristopher Ferris 
2983*2680e0c0SChristopher Ferris /*
2984*2680e0c0SChristopher Ferris   For security, the main invariant is that malloc/free/etc never
2985*2680e0c0SChristopher Ferris   writes to a static address other than malloc_state, unless static
2986*2680e0c0SChristopher Ferris   malloc_state itself has been corrupted, which cannot occur via
2987*2680e0c0SChristopher Ferris   malloc (because of these checks). In essence this means that we
2988*2680e0c0SChristopher Ferris   believe all pointers, sizes, maps etc held in malloc_state, but
2989*2680e0c0SChristopher Ferris   check all of those linked or offsetted from other embedded data
2990*2680e0c0SChristopher Ferris   structures.  These checks are interspersed with main code in a way
2991*2680e0c0SChristopher Ferris   that tends to minimize their run-time cost.
2992*2680e0c0SChristopher Ferris 
2993*2680e0c0SChristopher Ferris   When FOOTERS is defined, in addition to range checking, we also
2994*2680e0c0SChristopher Ferris   verify footer fields of inuse chunks, which can be used guarantee
2995*2680e0c0SChristopher Ferris   that the mstate controlling malloc/free is intact.  This is a
2996*2680e0c0SChristopher Ferris   streamlined version of the approach described by William Robertson
2997*2680e0c0SChristopher Ferris   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2998*2680e0c0SChristopher Ferris   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2999*2680e0c0SChristopher Ferris   of an inuse chunk holds the xor of its mstate and a random seed,
3000*2680e0c0SChristopher Ferris   that is checked upon calls to free() and realloc().  This is
3001*2680e0c0SChristopher Ferris   (probabalistically) unguessable from outside the program, but can be
3002*2680e0c0SChristopher Ferris   computed by any code successfully malloc'ing any chunk, so does not
3003*2680e0c0SChristopher Ferris   itself provide protection against code that has already broken
3004*2680e0c0SChristopher Ferris   security through some other means.  Unlike Robertson et al, we
3005*2680e0c0SChristopher Ferris   always dynamically check addresses of all offset chunks (previous,
3006*2680e0c0SChristopher Ferris   next, etc). This turns out to be cheaper than relying on hashes.
3007*2680e0c0SChristopher Ferris */
3008*2680e0c0SChristopher Ferris 
3009*2680e0c0SChristopher Ferris #if !INSECURE
3010*2680e0c0SChristopher Ferris /* Check if address a is at least as high as any from MORECORE or MMAP */
3011*2680e0c0SChristopher Ferris #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3012*2680e0c0SChristopher Ferris /* Check if address of next chunk n is higher than base chunk p */
3013*2680e0c0SChristopher Ferris #define ok_next(p, n)    ((char*)(p) < (char*)(n))
3014*2680e0c0SChristopher Ferris /* Check if p has inuse status */
3015*2680e0c0SChristopher Ferris #define ok_inuse(p)     is_inuse(p)
3016*2680e0c0SChristopher Ferris /* Check if p has its pinuse bit on */
3017*2680e0c0SChristopher Ferris #define ok_pinuse(p)     pinuse(p)
3018*2680e0c0SChristopher Ferris 
3019*2680e0c0SChristopher Ferris #else /* !INSECURE */
3020*2680e0c0SChristopher Ferris #define ok_address(M, a) (1)
3021*2680e0c0SChristopher Ferris #define ok_next(b, n)    (1)
3022*2680e0c0SChristopher Ferris #define ok_inuse(p)      (1)
3023*2680e0c0SChristopher Ferris #define ok_pinuse(p)     (1)
3024*2680e0c0SChristopher Ferris #endif /* !INSECURE */
3025*2680e0c0SChristopher Ferris 
3026*2680e0c0SChristopher Ferris #if (FOOTERS && !INSECURE)
3027*2680e0c0SChristopher Ferris /* Check if (alleged) mstate m has expected magic field */
3028*2680e0c0SChristopher Ferris #define ok_magic(M)      ((M)->magic == mparams.magic)
3029*2680e0c0SChristopher Ferris #else  /* (FOOTERS && !INSECURE) */
3030*2680e0c0SChristopher Ferris #define ok_magic(M)      (1)
3031*2680e0c0SChristopher Ferris #endif /* (FOOTERS && !INSECURE) */
3032*2680e0c0SChristopher Ferris 
3033*2680e0c0SChristopher Ferris /* In gcc, use __builtin_expect to minimize impact of checks */
3034*2680e0c0SChristopher Ferris #if !INSECURE
3035*2680e0c0SChristopher Ferris #if defined(__GNUC__) && __GNUC__ >= 3
3036*2680e0c0SChristopher Ferris #define RTCHECK(e)  __builtin_expect(e, 1)
3037*2680e0c0SChristopher Ferris #else /* GNUC */
3038*2680e0c0SChristopher Ferris #define RTCHECK(e)  (e)
3039*2680e0c0SChristopher Ferris #endif /* GNUC */
3040*2680e0c0SChristopher Ferris #else /* !INSECURE */
3041*2680e0c0SChristopher Ferris #define RTCHECK(e)  (1)
3042*2680e0c0SChristopher Ferris #endif /* !INSECURE */
3043*2680e0c0SChristopher Ferris 
3044*2680e0c0SChristopher Ferris /* macros to set up inuse chunks with or without footers */
3045*2680e0c0SChristopher Ferris 
3046*2680e0c0SChristopher Ferris #if !FOOTERS
3047*2680e0c0SChristopher Ferris 
3048*2680e0c0SChristopher Ferris #define mark_inuse_foot(M,p,s)
3049*2680e0c0SChristopher Ferris 
3050*2680e0c0SChristopher Ferris /* Macros for setting head/foot of non-mmapped chunks */
3051*2680e0c0SChristopher Ferris 
3052*2680e0c0SChristopher Ferris /* Set cinuse bit and pinuse bit of next chunk */
3053*2680e0c0SChristopher Ferris #define set_inuse(M,p,s)\
3054*2680e0c0SChristopher Ferris   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3055*2680e0c0SChristopher Ferris   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3056*2680e0c0SChristopher Ferris 
3057*2680e0c0SChristopher Ferris /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3058*2680e0c0SChristopher Ferris #define set_inuse_and_pinuse(M,p,s)\
3059*2680e0c0SChristopher Ferris   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3060*2680e0c0SChristopher Ferris   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3061*2680e0c0SChristopher Ferris 
3062*2680e0c0SChristopher Ferris /* Set size, cinuse and pinuse bit of this chunk */
3063*2680e0c0SChristopher Ferris #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3064*2680e0c0SChristopher Ferris   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3065*2680e0c0SChristopher Ferris 
3066*2680e0c0SChristopher Ferris #else /* FOOTERS */
3067*2680e0c0SChristopher Ferris 
3068*2680e0c0SChristopher Ferris /* Set foot of inuse chunk to be xor of mstate and seed */
3069*2680e0c0SChristopher Ferris #define mark_inuse_foot(M,p,s)\
3070*2680e0c0SChristopher Ferris   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3071*2680e0c0SChristopher Ferris 
3072*2680e0c0SChristopher Ferris #define get_mstate_for(p)\
3073*2680e0c0SChristopher Ferris   ((mstate)(((mchunkptr)((char*)(p) +\
3074*2680e0c0SChristopher Ferris     (chunksize(p))))->prev_foot ^ mparams.magic))
3075*2680e0c0SChristopher Ferris 
3076*2680e0c0SChristopher Ferris #define set_inuse(M,p,s)\
3077*2680e0c0SChristopher Ferris   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3078*2680e0c0SChristopher Ferris   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3079*2680e0c0SChristopher Ferris   mark_inuse_foot(M,p,s))
3080*2680e0c0SChristopher Ferris 
3081*2680e0c0SChristopher Ferris #define set_inuse_and_pinuse(M,p,s)\
3082*2680e0c0SChristopher Ferris   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3083*2680e0c0SChristopher Ferris   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3084*2680e0c0SChristopher Ferris  mark_inuse_foot(M,p,s))
3085*2680e0c0SChristopher Ferris 
3086*2680e0c0SChristopher Ferris #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3087*2680e0c0SChristopher Ferris   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3088*2680e0c0SChristopher Ferris   mark_inuse_foot(M, p, s))
3089*2680e0c0SChristopher Ferris 
3090*2680e0c0SChristopher Ferris #endif /* !FOOTERS */
3091*2680e0c0SChristopher Ferris 
3092*2680e0c0SChristopher Ferris /* ---------------------------- setting mparams -------------------------- */
3093*2680e0c0SChristopher Ferris 
3094*2680e0c0SChristopher Ferris #if LOCK_AT_FORK
pre_fork(void)3095*2680e0c0SChristopher Ferris static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
post_fork_parent(void)3096*2680e0c0SChristopher Ferris static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
post_fork_child(void)3097*2680e0c0SChristopher Ferris static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
3098*2680e0c0SChristopher Ferris #endif /* LOCK_AT_FORK */
3099*2680e0c0SChristopher Ferris 
3100*2680e0c0SChristopher Ferris /* Initialize mparams */
init_mparams(void)3101*2680e0c0SChristopher Ferris static int init_mparams(void) {
3102*2680e0c0SChristopher Ferris   /* BEGIN android-added: move pthread_atfork outside of lock */
3103*2680e0c0SChristopher Ferris   int first_run = 0;
3104*2680e0c0SChristopher Ferris   /* END android-added */
3105*2680e0c0SChristopher Ferris #ifdef NEED_GLOBAL_LOCK_INIT
3106*2680e0c0SChristopher Ferris   if (malloc_global_mutex_status <= 0)
3107*2680e0c0SChristopher Ferris     init_malloc_global_mutex();
3108*2680e0c0SChristopher Ferris #endif
3109*2680e0c0SChristopher Ferris 
3110*2680e0c0SChristopher Ferris   ACQUIRE_MALLOC_GLOBAL_LOCK();
3111*2680e0c0SChristopher Ferris   if (mparams.magic == 0) {
3112*2680e0c0SChristopher Ferris     size_t magic;
3113*2680e0c0SChristopher Ferris     size_t psize;
3114*2680e0c0SChristopher Ferris     size_t gsize;
3115*2680e0c0SChristopher Ferris     /* BEGIN android-added: move pthread_atfork outside of lock */
3116*2680e0c0SChristopher Ferris     first_run = 1;
3117*2680e0c0SChristopher Ferris     /* END android-added */
3118*2680e0c0SChristopher Ferris 
3119*2680e0c0SChristopher Ferris #ifndef WIN32
3120*2680e0c0SChristopher Ferris     psize = malloc_getpagesize;
3121*2680e0c0SChristopher Ferris     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3122*2680e0c0SChristopher Ferris #else /* WIN32 */
3123*2680e0c0SChristopher Ferris     {
3124*2680e0c0SChristopher Ferris       SYSTEM_INFO system_info;
3125*2680e0c0SChristopher Ferris       GetSystemInfo(&system_info);
3126*2680e0c0SChristopher Ferris       psize = system_info.dwPageSize;
3127*2680e0c0SChristopher Ferris       gsize = ((DEFAULT_GRANULARITY != 0)?
3128*2680e0c0SChristopher Ferris                DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3129*2680e0c0SChristopher Ferris     }
3130*2680e0c0SChristopher Ferris #endif /* WIN32 */
3131*2680e0c0SChristopher Ferris 
3132*2680e0c0SChristopher Ferris     /* Sanity-check configuration:
3133*2680e0c0SChristopher Ferris        size_t must be unsigned and as wide as pointer type.
3134*2680e0c0SChristopher Ferris        ints must be at least 4 bytes.
3135*2680e0c0SChristopher Ferris        alignment must be at least 8.
3136*2680e0c0SChristopher Ferris        Alignment, min chunk size, and page size must all be powers of 2.
3137*2680e0c0SChristopher Ferris     */
3138*2680e0c0SChristopher Ferris     if ((sizeof(size_t) != sizeof(char*)) ||
3139*2680e0c0SChristopher Ferris         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3140*2680e0c0SChristopher Ferris         (sizeof(int) < 4)  ||
3141*2680e0c0SChristopher Ferris         (MALLOC_ALIGNMENT < (size_t)8U) ||
3142*2680e0c0SChristopher Ferris         ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3143*2680e0c0SChristopher Ferris         ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3144*2680e0c0SChristopher Ferris         ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3145*2680e0c0SChristopher Ferris         ((psize            & (psize-SIZE_T_ONE))            != 0))
3146*2680e0c0SChristopher Ferris       ABORT;
3147*2680e0c0SChristopher Ferris     mparams.granularity = gsize;
3148*2680e0c0SChristopher Ferris     mparams.page_size = psize;
3149*2680e0c0SChristopher Ferris     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3150*2680e0c0SChristopher Ferris     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3151*2680e0c0SChristopher Ferris #if MORECORE_CONTIGUOUS
3152*2680e0c0SChristopher Ferris     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3153*2680e0c0SChristopher Ferris #else  /* MORECORE_CONTIGUOUS */
3154*2680e0c0SChristopher Ferris     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3155*2680e0c0SChristopher Ferris #endif /* MORECORE_CONTIGUOUS */
3156*2680e0c0SChristopher Ferris 
3157*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
3158*2680e0c0SChristopher Ferris     /* Set up lock for main malloc area */
3159*2680e0c0SChristopher Ferris     gm->mflags = mparams.default_mflags;
3160*2680e0c0SChristopher Ferris     (void)INITIAL_LOCK(&gm->mutex);
3161*2680e0c0SChristopher Ferris #endif
3162*2680e0c0SChristopher Ferris     /* BEGIN android-removed: move pthread_atfork outside of lock */
3163*2680e0c0SChristopher Ferris #if 0 && LOCK_AT_FORK
3164*2680e0c0SChristopher Ferris     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3165*2680e0c0SChristopher Ferris #endif
3166*2680e0c0SChristopher Ferris     /* END android-removed */
3167*2680e0c0SChristopher Ferris 
3168*2680e0c0SChristopher Ferris     {
3169*2680e0c0SChristopher Ferris #if USE_DEV_RANDOM
3170*2680e0c0SChristopher Ferris       int fd;
3171*2680e0c0SChristopher Ferris       unsigned char buf[sizeof(size_t)];
3172*2680e0c0SChristopher Ferris       /* Try to use /dev/urandom, else fall back on using time */
3173*2680e0c0SChristopher Ferris       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3174*2680e0c0SChristopher Ferris           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3175*2680e0c0SChristopher Ferris         magic = *((size_t *) buf);
3176*2680e0c0SChristopher Ferris         close(fd);
3177*2680e0c0SChristopher Ferris       }
3178*2680e0c0SChristopher Ferris       else
3179*2680e0c0SChristopher Ferris #endif /* USE_DEV_RANDOM */
3180*2680e0c0SChristopher Ferris #ifdef WIN32
3181*2680e0c0SChristopher Ferris       magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3182*2680e0c0SChristopher Ferris #elif defined(LACKS_TIME_H)
3183*2680e0c0SChristopher Ferris       magic = (size_t)&magic ^ (size_t)0x55555555U;
3184*2680e0c0SChristopher Ferris #else
3185*2680e0c0SChristopher Ferris       magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3186*2680e0c0SChristopher Ferris #endif
3187*2680e0c0SChristopher Ferris       magic |= (size_t)8U;    /* ensure nonzero */
3188*2680e0c0SChristopher Ferris       magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3189*2680e0c0SChristopher Ferris       /* Until memory modes commonly available, use volatile-write */
3190*2680e0c0SChristopher Ferris       (*(volatile size_t *)(&(mparams.magic))) = magic;
3191*2680e0c0SChristopher Ferris     }
3192*2680e0c0SChristopher Ferris   }
3193*2680e0c0SChristopher Ferris 
3194*2680e0c0SChristopher Ferris   RELEASE_MALLOC_GLOBAL_LOCK();
3195*2680e0c0SChristopher Ferris   /* BEGIN android-added: move pthread_atfork outside of lock */
3196*2680e0c0SChristopher Ferris   if (first_run != 0) {
3197*2680e0c0SChristopher Ferris #if LOCK_AT_FORK
3198*2680e0c0SChristopher Ferris     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3199*2680e0c0SChristopher Ferris #endif
3200*2680e0c0SChristopher Ferris   }
3201*2680e0c0SChristopher Ferris   /* END android-added */
3202*2680e0c0SChristopher Ferris   return 1;
3203*2680e0c0SChristopher Ferris }
3204*2680e0c0SChristopher Ferris 
3205*2680e0c0SChristopher Ferris /* support for mallopt */
change_mparam(int param_number,int value)3206*2680e0c0SChristopher Ferris static int change_mparam(int param_number, int value) {
3207*2680e0c0SChristopher Ferris   size_t val;
3208*2680e0c0SChristopher Ferris   ensure_initialization();
3209*2680e0c0SChristopher Ferris   val = (value == -1)? MAX_SIZE_T : (size_t)value;
3210*2680e0c0SChristopher Ferris   switch(param_number) {
3211*2680e0c0SChristopher Ferris   case M_TRIM_THRESHOLD:
3212*2680e0c0SChristopher Ferris     mparams.trim_threshold = val;
3213*2680e0c0SChristopher Ferris     return 1;
3214*2680e0c0SChristopher Ferris   case M_GRANULARITY:
3215*2680e0c0SChristopher Ferris     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3216*2680e0c0SChristopher Ferris       mparams.granularity = val;
3217*2680e0c0SChristopher Ferris       return 1;
3218*2680e0c0SChristopher Ferris     }
3219*2680e0c0SChristopher Ferris     else
3220*2680e0c0SChristopher Ferris       return 0;
3221*2680e0c0SChristopher Ferris   case M_MMAP_THRESHOLD:
3222*2680e0c0SChristopher Ferris     mparams.mmap_threshold = val;
3223*2680e0c0SChristopher Ferris     return 1;
3224*2680e0c0SChristopher Ferris   default:
3225*2680e0c0SChristopher Ferris     return 0;
3226*2680e0c0SChristopher Ferris   }
3227*2680e0c0SChristopher Ferris }
3228*2680e0c0SChristopher Ferris 
3229*2680e0c0SChristopher Ferris #if DEBUG
3230*2680e0c0SChristopher Ferris /* ------------------------- Debugging Support --------------------------- */
3231*2680e0c0SChristopher Ferris 
3232*2680e0c0SChristopher Ferris /* Check properties of any chunk, whether free, inuse, mmapped etc  */
do_check_any_chunk(mstate m,mchunkptr p)3233*2680e0c0SChristopher Ferris static void do_check_any_chunk(mstate m, mchunkptr p) {
3234*2680e0c0SChristopher Ferris   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3235*2680e0c0SChristopher Ferris   assert(ok_address(m, p));
3236*2680e0c0SChristopher Ferris }
3237*2680e0c0SChristopher Ferris 
3238*2680e0c0SChristopher Ferris /* Check properties of top chunk */
do_check_top_chunk(mstate m,mchunkptr p)3239*2680e0c0SChristopher Ferris static void do_check_top_chunk(mstate m, mchunkptr p) {
3240*2680e0c0SChristopher Ferris   msegmentptr sp = segment_holding(m, (char*)p);
3241*2680e0c0SChristopher Ferris   size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3242*2680e0c0SChristopher Ferris   assert(sp != 0);
3243*2680e0c0SChristopher Ferris   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3244*2680e0c0SChristopher Ferris   assert(ok_address(m, p));
3245*2680e0c0SChristopher Ferris   assert(sz == m->topsize);
3246*2680e0c0SChristopher Ferris   assert(sz > 0);
3247*2680e0c0SChristopher Ferris   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3248*2680e0c0SChristopher Ferris   assert(pinuse(p));
3249*2680e0c0SChristopher Ferris   assert(!pinuse(chunk_plus_offset(p, sz)));
3250*2680e0c0SChristopher Ferris }
3251*2680e0c0SChristopher Ferris 
3252*2680e0c0SChristopher Ferris /* Check properties of (inuse) mmapped chunks */
do_check_mmapped_chunk(mstate m,mchunkptr p)3253*2680e0c0SChristopher Ferris static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3254*2680e0c0SChristopher Ferris   size_t  sz = chunksize(p);
3255*2680e0c0SChristopher Ferris   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3256*2680e0c0SChristopher Ferris   assert(is_mmapped(p));
3257*2680e0c0SChristopher Ferris   assert(use_mmap(m));
3258*2680e0c0SChristopher Ferris   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3259*2680e0c0SChristopher Ferris   assert(ok_address(m, p));
3260*2680e0c0SChristopher Ferris   assert(!is_small(sz));
3261*2680e0c0SChristopher Ferris   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3262*2680e0c0SChristopher Ferris   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3263*2680e0c0SChristopher Ferris   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3264*2680e0c0SChristopher Ferris }
3265*2680e0c0SChristopher Ferris 
3266*2680e0c0SChristopher Ferris /* Check properties of inuse chunks */
do_check_inuse_chunk(mstate m,mchunkptr p)3267*2680e0c0SChristopher Ferris static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3268*2680e0c0SChristopher Ferris   do_check_any_chunk(m, p);
3269*2680e0c0SChristopher Ferris   assert(is_inuse(p));
3270*2680e0c0SChristopher Ferris   assert(next_pinuse(p));
3271*2680e0c0SChristopher Ferris   /* If not pinuse and not mmapped, previous chunk has OK offset */
3272*2680e0c0SChristopher Ferris   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3273*2680e0c0SChristopher Ferris   if (is_mmapped(p))
3274*2680e0c0SChristopher Ferris     do_check_mmapped_chunk(m, p);
3275*2680e0c0SChristopher Ferris }
3276*2680e0c0SChristopher Ferris 
3277*2680e0c0SChristopher Ferris /* Check properties of free chunks */
do_check_free_chunk(mstate m,mchunkptr p)3278*2680e0c0SChristopher Ferris static void do_check_free_chunk(mstate m, mchunkptr p) {
3279*2680e0c0SChristopher Ferris   size_t sz = chunksize(p);
3280*2680e0c0SChristopher Ferris   mchunkptr next = chunk_plus_offset(p, sz);
3281*2680e0c0SChristopher Ferris   do_check_any_chunk(m, p);
3282*2680e0c0SChristopher Ferris   assert(!is_inuse(p));
3283*2680e0c0SChristopher Ferris   assert(!next_pinuse(p));
3284*2680e0c0SChristopher Ferris   assert (!is_mmapped(p));
3285*2680e0c0SChristopher Ferris   if (p != m->dv && p != m->top) {
3286*2680e0c0SChristopher Ferris     if (sz >= MIN_CHUNK_SIZE) {
3287*2680e0c0SChristopher Ferris       assert((sz & CHUNK_ALIGN_MASK) == 0);
3288*2680e0c0SChristopher Ferris       assert(is_aligned(chunk2mem(p)));
3289*2680e0c0SChristopher Ferris       assert(next->prev_foot == sz);
3290*2680e0c0SChristopher Ferris       assert(pinuse(p));
3291*2680e0c0SChristopher Ferris       assert (next == m->top || is_inuse(next));
3292*2680e0c0SChristopher Ferris       assert(p->fd->bk == p);
3293*2680e0c0SChristopher Ferris       assert(p->bk->fd == p);
3294*2680e0c0SChristopher Ferris     }
3295*2680e0c0SChristopher Ferris     else  /* markers are always of size SIZE_T_SIZE */
3296*2680e0c0SChristopher Ferris       assert(sz == SIZE_T_SIZE);
3297*2680e0c0SChristopher Ferris   }
3298*2680e0c0SChristopher Ferris }
3299*2680e0c0SChristopher Ferris 
3300*2680e0c0SChristopher Ferris /* Check properties of malloced chunks at the point they are malloced */
do_check_malloced_chunk(mstate m,void * mem,size_t s)3301*2680e0c0SChristopher Ferris static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3302*2680e0c0SChristopher Ferris   if (mem != 0) {
3303*2680e0c0SChristopher Ferris     mchunkptr p = mem2chunk(mem);
3304*2680e0c0SChristopher Ferris     size_t sz = p->head & ~INUSE_BITS;
3305*2680e0c0SChristopher Ferris     do_check_inuse_chunk(m, p);
3306*2680e0c0SChristopher Ferris     assert((sz & CHUNK_ALIGN_MASK) == 0);
3307*2680e0c0SChristopher Ferris     assert(sz >= MIN_CHUNK_SIZE);
3308*2680e0c0SChristopher Ferris     assert(sz >= s);
3309*2680e0c0SChristopher Ferris     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3310*2680e0c0SChristopher Ferris     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3311*2680e0c0SChristopher Ferris   }
3312*2680e0c0SChristopher Ferris }
3313*2680e0c0SChristopher Ferris 
3314*2680e0c0SChristopher Ferris /* Check a tree and its subtrees.  */
do_check_tree(mstate m,tchunkptr t)3315*2680e0c0SChristopher Ferris static void do_check_tree(mstate m, tchunkptr t) {
3316*2680e0c0SChristopher Ferris   tchunkptr head = 0;
3317*2680e0c0SChristopher Ferris   tchunkptr u = t;
3318*2680e0c0SChristopher Ferris   bindex_t tindex = t->index;
3319*2680e0c0SChristopher Ferris   size_t tsize = chunksize(t);
3320*2680e0c0SChristopher Ferris   bindex_t idx;
3321*2680e0c0SChristopher Ferris   compute_tree_index(tsize, idx);
3322*2680e0c0SChristopher Ferris   assert(tindex == idx);
3323*2680e0c0SChristopher Ferris   assert(tsize >= MIN_LARGE_SIZE);
3324*2680e0c0SChristopher Ferris   assert(tsize >= minsize_for_tree_index(idx));
3325*2680e0c0SChristopher Ferris   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3326*2680e0c0SChristopher Ferris 
3327*2680e0c0SChristopher Ferris   do { /* traverse through chain of same-sized nodes */
3328*2680e0c0SChristopher Ferris     do_check_any_chunk(m, ((mchunkptr)u));
3329*2680e0c0SChristopher Ferris     assert(u->index == tindex);
3330*2680e0c0SChristopher Ferris     assert(chunksize(u) == tsize);
3331*2680e0c0SChristopher Ferris     assert(!is_inuse(u));
3332*2680e0c0SChristopher Ferris     assert(!next_pinuse(u));
3333*2680e0c0SChristopher Ferris     assert(u->fd->bk == u);
3334*2680e0c0SChristopher Ferris     assert(u->bk->fd == u);
3335*2680e0c0SChristopher Ferris     if (u->parent == 0) {
3336*2680e0c0SChristopher Ferris       assert(u->child[0] == 0);
3337*2680e0c0SChristopher Ferris       assert(u->child[1] == 0);
3338*2680e0c0SChristopher Ferris     }
3339*2680e0c0SChristopher Ferris     else {
3340*2680e0c0SChristopher Ferris       assert(head == 0); /* only one node on chain has parent */
3341*2680e0c0SChristopher Ferris       head = u;
3342*2680e0c0SChristopher Ferris       assert(u->parent != u);
3343*2680e0c0SChristopher Ferris       assert (u->parent->child[0] == u ||
3344*2680e0c0SChristopher Ferris               u->parent->child[1] == u ||
3345*2680e0c0SChristopher Ferris               *((tbinptr*)(u->parent)) == u);
3346*2680e0c0SChristopher Ferris       if (u->child[0] != 0) {
3347*2680e0c0SChristopher Ferris         assert(u->child[0]->parent == u);
3348*2680e0c0SChristopher Ferris         assert(u->child[0] != u);
3349*2680e0c0SChristopher Ferris         do_check_tree(m, u->child[0]);
3350*2680e0c0SChristopher Ferris       }
3351*2680e0c0SChristopher Ferris       if (u->child[1] != 0) {
3352*2680e0c0SChristopher Ferris         assert(u->child[1]->parent == u);
3353*2680e0c0SChristopher Ferris         assert(u->child[1] != u);
3354*2680e0c0SChristopher Ferris         do_check_tree(m, u->child[1]);
3355*2680e0c0SChristopher Ferris       }
3356*2680e0c0SChristopher Ferris       if (u->child[0] != 0 && u->child[1] != 0) {
3357*2680e0c0SChristopher Ferris         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3358*2680e0c0SChristopher Ferris       }
3359*2680e0c0SChristopher Ferris     }
3360*2680e0c0SChristopher Ferris     u = u->fd;
3361*2680e0c0SChristopher Ferris   } while (u != t);
3362*2680e0c0SChristopher Ferris   assert(head != 0);
3363*2680e0c0SChristopher Ferris }
3364*2680e0c0SChristopher Ferris 
3365*2680e0c0SChristopher Ferris /*  Check all the chunks in a treebin.  */
do_check_treebin(mstate m,bindex_t i)3366*2680e0c0SChristopher Ferris static void do_check_treebin(mstate m, bindex_t i) {
3367*2680e0c0SChristopher Ferris   tbinptr* tb = treebin_at(m, i);
3368*2680e0c0SChristopher Ferris   tchunkptr t = *tb;
3369*2680e0c0SChristopher Ferris   int empty = (m->treemap & (1U << i)) == 0;
3370*2680e0c0SChristopher Ferris   if (t == 0)
3371*2680e0c0SChristopher Ferris     assert(empty);
3372*2680e0c0SChristopher Ferris   if (!empty)
3373*2680e0c0SChristopher Ferris     do_check_tree(m, t);
3374*2680e0c0SChristopher Ferris }
3375*2680e0c0SChristopher Ferris 
3376*2680e0c0SChristopher Ferris /*  Check all the chunks in a smallbin.  */
do_check_smallbin(mstate m,bindex_t i)3377*2680e0c0SChristopher Ferris static void do_check_smallbin(mstate m, bindex_t i) {
3378*2680e0c0SChristopher Ferris   sbinptr b = smallbin_at(m, i);
3379*2680e0c0SChristopher Ferris   mchunkptr p = b->bk;
3380*2680e0c0SChristopher Ferris   unsigned int empty = (m->smallmap & (1U << i)) == 0;
3381*2680e0c0SChristopher Ferris   if (p == b)
3382*2680e0c0SChristopher Ferris     assert(empty);
3383*2680e0c0SChristopher Ferris   if (!empty) {
3384*2680e0c0SChristopher Ferris     for (; p != b; p = p->bk) {
3385*2680e0c0SChristopher Ferris       size_t size = chunksize(p);
3386*2680e0c0SChristopher Ferris       mchunkptr q;
3387*2680e0c0SChristopher Ferris       /* each chunk claims to be free */
3388*2680e0c0SChristopher Ferris       do_check_free_chunk(m, p);
3389*2680e0c0SChristopher Ferris       /* chunk belongs in bin */
3390*2680e0c0SChristopher Ferris       assert(small_index(size) == i);
3391*2680e0c0SChristopher Ferris       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3392*2680e0c0SChristopher Ferris       /* chunk is followed by an inuse chunk */
3393*2680e0c0SChristopher Ferris       q = next_chunk(p);
3394*2680e0c0SChristopher Ferris       if (q->head != FENCEPOST_HEAD)
3395*2680e0c0SChristopher Ferris         do_check_inuse_chunk(m, q);
3396*2680e0c0SChristopher Ferris     }
3397*2680e0c0SChristopher Ferris   }
3398*2680e0c0SChristopher Ferris }
3399*2680e0c0SChristopher Ferris 
3400*2680e0c0SChristopher Ferris /* Find x in a bin. Used in other check functions. */
bin_find(mstate m,mchunkptr x)3401*2680e0c0SChristopher Ferris static int bin_find(mstate m, mchunkptr x) {
3402*2680e0c0SChristopher Ferris   size_t size = chunksize(x);
3403*2680e0c0SChristopher Ferris   if (is_small(size)) {
3404*2680e0c0SChristopher Ferris     bindex_t sidx = small_index(size);
3405*2680e0c0SChristopher Ferris     sbinptr b = smallbin_at(m, sidx);
3406*2680e0c0SChristopher Ferris     if (smallmap_is_marked(m, sidx)) {
3407*2680e0c0SChristopher Ferris       mchunkptr p = b;
3408*2680e0c0SChristopher Ferris       do {
3409*2680e0c0SChristopher Ferris         if (p == x)
3410*2680e0c0SChristopher Ferris           return 1;
3411*2680e0c0SChristopher Ferris       } while ((p = p->fd) != b);
3412*2680e0c0SChristopher Ferris     }
3413*2680e0c0SChristopher Ferris   }
3414*2680e0c0SChristopher Ferris   else {
3415*2680e0c0SChristopher Ferris     bindex_t tidx;
3416*2680e0c0SChristopher Ferris     compute_tree_index(size, tidx);
3417*2680e0c0SChristopher Ferris     if (treemap_is_marked(m, tidx)) {
3418*2680e0c0SChristopher Ferris       tchunkptr t = *treebin_at(m, tidx);
3419*2680e0c0SChristopher Ferris       size_t sizebits = size << leftshift_for_tree_index(tidx);
3420*2680e0c0SChristopher Ferris       while (t != 0 && chunksize(t) != size) {
3421*2680e0c0SChristopher Ferris         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3422*2680e0c0SChristopher Ferris         sizebits <<= 1;
3423*2680e0c0SChristopher Ferris       }
3424*2680e0c0SChristopher Ferris       if (t != 0) {
3425*2680e0c0SChristopher Ferris         tchunkptr u = t;
3426*2680e0c0SChristopher Ferris         do {
3427*2680e0c0SChristopher Ferris           if (u == (tchunkptr)x)
3428*2680e0c0SChristopher Ferris             return 1;
3429*2680e0c0SChristopher Ferris         } while ((u = u->fd) != t);
3430*2680e0c0SChristopher Ferris       }
3431*2680e0c0SChristopher Ferris     }
3432*2680e0c0SChristopher Ferris   }
3433*2680e0c0SChristopher Ferris   return 0;
3434*2680e0c0SChristopher Ferris }
3435*2680e0c0SChristopher Ferris 
3436*2680e0c0SChristopher Ferris /* Traverse each chunk and check it; return total */
traverse_and_check(mstate m)3437*2680e0c0SChristopher Ferris static size_t traverse_and_check(mstate m) {
3438*2680e0c0SChristopher Ferris   size_t sum = 0;
3439*2680e0c0SChristopher Ferris   if (is_initialized(m)) {
3440*2680e0c0SChristopher Ferris     msegmentptr s = &m->seg;
3441*2680e0c0SChristopher Ferris     sum += m->topsize + TOP_FOOT_SIZE;
3442*2680e0c0SChristopher Ferris     while (s != 0) {
3443*2680e0c0SChristopher Ferris       mchunkptr q = align_as_chunk(s->base);
3444*2680e0c0SChristopher Ferris       mchunkptr lastq = 0;
3445*2680e0c0SChristopher Ferris       assert(pinuse(q));
3446*2680e0c0SChristopher Ferris       while (segment_holds(s, q) &&
3447*2680e0c0SChristopher Ferris              q != m->top && q->head != FENCEPOST_HEAD) {
3448*2680e0c0SChristopher Ferris         sum += chunksize(q);
3449*2680e0c0SChristopher Ferris         if (is_inuse(q)) {
3450*2680e0c0SChristopher Ferris           assert(!bin_find(m, q));
3451*2680e0c0SChristopher Ferris           do_check_inuse_chunk(m, q);
3452*2680e0c0SChristopher Ferris         }
3453*2680e0c0SChristopher Ferris         else {
3454*2680e0c0SChristopher Ferris           assert(q == m->dv || bin_find(m, q));
3455*2680e0c0SChristopher Ferris           assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3456*2680e0c0SChristopher Ferris           do_check_free_chunk(m, q);
3457*2680e0c0SChristopher Ferris         }
3458*2680e0c0SChristopher Ferris         lastq = q;
3459*2680e0c0SChristopher Ferris         q = next_chunk(q);
3460*2680e0c0SChristopher Ferris       }
3461*2680e0c0SChristopher Ferris       s = s->next;
3462*2680e0c0SChristopher Ferris     }
3463*2680e0c0SChristopher Ferris   }
3464*2680e0c0SChristopher Ferris   return sum;
3465*2680e0c0SChristopher Ferris }
3466*2680e0c0SChristopher Ferris 
3467*2680e0c0SChristopher Ferris 
3468*2680e0c0SChristopher Ferris /* Check all properties of malloc_state. */
do_check_malloc_state(mstate m)3469*2680e0c0SChristopher Ferris static void do_check_malloc_state(mstate m) {
3470*2680e0c0SChristopher Ferris   bindex_t i;
3471*2680e0c0SChristopher Ferris   size_t total;
3472*2680e0c0SChristopher Ferris   /* check bins */
3473*2680e0c0SChristopher Ferris   for (i = 0; i < NSMALLBINS; ++i)
3474*2680e0c0SChristopher Ferris     do_check_smallbin(m, i);
3475*2680e0c0SChristopher Ferris   for (i = 0; i < NTREEBINS; ++i)
3476*2680e0c0SChristopher Ferris     do_check_treebin(m, i);
3477*2680e0c0SChristopher Ferris 
3478*2680e0c0SChristopher Ferris   if (m->dvsize != 0) { /* check dv chunk */
3479*2680e0c0SChristopher Ferris     do_check_any_chunk(m, m->dv);
3480*2680e0c0SChristopher Ferris     assert(m->dvsize == chunksize(m->dv));
3481*2680e0c0SChristopher Ferris     assert(m->dvsize >= MIN_CHUNK_SIZE);
3482*2680e0c0SChristopher Ferris     assert(bin_find(m, m->dv) == 0);
3483*2680e0c0SChristopher Ferris   }
3484*2680e0c0SChristopher Ferris 
3485*2680e0c0SChristopher Ferris   if (m->top != 0) {   /* check top chunk */
3486*2680e0c0SChristopher Ferris     do_check_top_chunk(m, m->top);
3487*2680e0c0SChristopher Ferris     /*assert(m->topsize == chunksize(m->top)); redundant */
3488*2680e0c0SChristopher Ferris     assert(m->topsize > 0);
3489*2680e0c0SChristopher Ferris     assert(bin_find(m, m->top) == 0);
3490*2680e0c0SChristopher Ferris   }
3491*2680e0c0SChristopher Ferris 
3492*2680e0c0SChristopher Ferris   total = traverse_and_check(m);
3493*2680e0c0SChristopher Ferris   assert(total <= m->footprint);
3494*2680e0c0SChristopher Ferris   assert(m->footprint <= m->max_footprint);
3495*2680e0c0SChristopher Ferris }
3496*2680e0c0SChristopher Ferris #endif /* DEBUG */
3497*2680e0c0SChristopher Ferris 
3498*2680e0c0SChristopher Ferris /* ----------------------------- statistics ------------------------------ */
3499*2680e0c0SChristopher Ferris 
3500*2680e0c0SChristopher Ferris #if !NO_MALLINFO
internal_mallinfo(mstate m)3501*2680e0c0SChristopher Ferris static struct mallinfo internal_mallinfo(mstate m) {
3502*2680e0c0SChristopher Ferris   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3503*2680e0c0SChristopher Ferris   ensure_initialization();
3504*2680e0c0SChristopher Ferris   if (!PREACTION(m)) {
3505*2680e0c0SChristopher Ferris     check_malloc_state(m);
3506*2680e0c0SChristopher Ferris     if (is_initialized(m)) {
3507*2680e0c0SChristopher Ferris       size_t nfree = SIZE_T_ONE; /* top always free */
3508*2680e0c0SChristopher Ferris       size_t mfree = m->topsize + TOP_FOOT_SIZE;
3509*2680e0c0SChristopher Ferris       size_t sum = mfree;
3510*2680e0c0SChristopher Ferris       msegmentptr s = &m->seg;
3511*2680e0c0SChristopher Ferris       while (s != 0) {
3512*2680e0c0SChristopher Ferris         mchunkptr q = align_as_chunk(s->base);
3513*2680e0c0SChristopher Ferris         while (segment_holds(s, q) &&
3514*2680e0c0SChristopher Ferris                q != m->top && q->head != FENCEPOST_HEAD) {
3515*2680e0c0SChristopher Ferris           size_t sz = chunksize(q);
3516*2680e0c0SChristopher Ferris           sum += sz;
3517*2680e0c0SChristopher Ferris           if (!is_inuse(q)) {
3518*2680e0c0SChristopher Ferris             mfree += sz;
3519*2680e0c0SChristopher Ferris             ++nfree;
3520*2680e0c0SChristopher Ferris           }
3521*2680e0c0SChristopher Ferris           q = next_chunk(q);
3522*2680e0c0SChristopher Ferris         }
3523*2680e0c0SChristopher Ferris         s = s->next;
3524*2680e0c0SChristopher Ferris       }
3525*2680e0c0SChristopher Ferris 
3526*2680e0c0SChristopher Ferris       nm.arena    = sum;
3527*2680e0c0SChristopher Ferris       nm.ordblks  = nfree;
3528*2680e0c0SChristopher Ferris       nm.hblkhd   = m->footprint - sum;
3529*2680e0c0SChristopher Ferris       /* BEGIN android-changed: usmblks set to footprint from max_footprint */
3530*2680e0c0SChristopher Ferris       nm.usmblks  = m->footprint;
3531*2680e0c0SChristopher Ferris       /* END android-changed */
3532*2680e0c0SChristopher Ferris       nm.uordblks = m->footprint - mfree;
3533*2680e0c0SChristopher Ferris       nm.fordblks = mfree;
3534*2680e0c0SChristopher Ferris       nm.keepcost = m->topsize;
3535*2680e0c0SChristopher Ferris     }
3536*2680e0c0SChristopher Ferris 
3537*2680e0c0SChristopher Ferris     POSTACTION(m);
3538*2680e0c0SChristopher Ferris   }
3539*2680e0c0SChristopher Ferris   return nm;
3540*2680e0c0SChristopher Ferris }
3541*2680e0c0SChristopher Ferris #endif /* !NO_MALLINFO */
3542*2680e0c0SChristopher Ferris 
3543*2680e0c0SChristopher Ferris #if !NO_MALLOC_STATS
internal_malloc_stats(mstate m)3544*2680e0c0SChristopher Ferris static void internal_malloc_stats(mstate m) {
3545*2680e0c0SChristopher Ferris   ensure_initialization();
3546*2680e0c0SChristopher Ferris   if (!PREACTION(m)) {
3547*2680e0c0SChristopher Ferris     size_t maxfp = 0;
3548*2680e0c0SChristopher Ferris     size_t fp = 0;
3549*2680e0c0SChristopher Ferris     size_t used = 0;
3550*2680e0c0SChristopher Ferris     check_malloc_state(m);
3551*2680e0c0SChristopher Ferris     if (is_initialized(m)) {
3552*2680e0c0SChristopher Ferris       msegmentptr s = &m->seg;
3553*2680e0c0SChristopher Ferris       maxfp = m->max_footprint;
3554*2680e0c0SChristopher Ferris       fp = m->footprint;
3555*2680e0c0SChristopher Ferris       used = fp - (m->topsize + TOP_FOOT_SIZE);
3556*2680e0c0SChristopher Ferris 
3557*2680e0c0SChristopher Ferris       while (s != 0) {
3558*2680e0c0SChristopher Ferris         mchunkptr q = align_as_chunk(s->base);
3559*2680e0c0SChristopher Ferris         while (segment_holds(s, q) &&
3560*2680e0c0SChristopher Ferris                q != m->top && q->head != FENCEPOST_HEAD) {
3561*2680e0c0SChristopher Ferris           if (!is_inuse(q))
3562*2680e0c0SChristopher Ferris             used -= chunksize(q);
3563*2680e0c0SChristopher Ferris           q = next_chunk(q);
3564*2680e0c0SChristopher Ferris         }
3565*2680e0c0SChristopher Ferris         s = s->next;
3566*2680e0c0SChristopher Ferris       }
3567*2680e0c0SChristopher Ferris     }
3568*2680e0c0SChristopher Ferris     POSTACTION(m); /* drop lock */
3569*2680e0c0SChristopher Ferris     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3570*2680e0c0SChristopher Ferris     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
3571*2680e0c0SChristopher Ferris     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
3572*2680e0c0SChristopher Ferris   }
3573*2680e0c0SChristopher Ferris }
3574*2680e0c0SChristopher Ferris #endif /* NO_MALLOC_STATS */
3575*2680e0c0SChristopher Ferris 
3576*2680e0c0SChristopher Ferris /* ----------------------- Operations on smallbins ----------------------- */
3577*2680e0c0SChristopher Ferris 
3578*2680e0c0SChristopher Ferris /*
3579*2680e0c0SChristopher Ferris   Various forms of linking and unlinking are defined as macros.  Even
3580*2680e0c0SChristopher Ferris   the ones for trees, which are very long but have very short typical
3581*2680e0c0SChristopher Ferris   paths.  This is ugly but reduces reliance on inlining support of
3582*2680e0c0SChristopher Ferris   compilers.
3583*2680e0c0SChristopher Ferris */
3584*2680e0c0SChristopher Ferris 
3585*2680e0c0SChristopher Ferris /* Link a free chunk into a smallbin  */
3586*2680e0c0SChristopher Ferris #define insert_small_chunk(M, P, S) {\
3587*2680e0c0SChristopher Ferris   bindex_t I  = small_index(S);\
3588*2680e0c0SChristopher Ferris   mchunkptr B = smallbin_at(M, I);\
3589*2680e0c0SChristopher Ferris   mchunkptr F = B;\
3590*2680e0c0SChristopher Ferris   assert(S >= MIN_CHUNK_SIZE);\
3591*2680e0c0SChristopher Ferris   if (!smallmap_is_marked(M, I))\
3592*2680e0c0SChristopher Ferris     mark_smallmap(M, I);\
3593*2680e0c0SChristopher Ferris   else if (RTCHECK(ok_address(M, B->fd)))\
3594*2680e0c0SChristopher Ferris     F = B->fd;\
3595*2680e0c0SChristopher Ferris   else {\
3596*2680e0c0SChristopher Ferris     CORRUPTION_ERROR_ACTION(M);\
3597*2680e0c0SChristopher Ferris   }\
3598*2680e0c0SChristopher Ferris   B->fd = P;\
3599*2680e0c0SChristopher Ferris   F->bk = P;\
3600*2680e0c0SChristopher Ferris   P->fd = F;\
3601*2680e0c0SChristopher Ferris   P->bk = B;\
3602*2680e0c0SChristopher Ferris }
3603*2680e0c0SChristopher Ferris 
3604*2680e0c0SChristopher Ferris /* Unlink a chunk from a smallbin  */
3605*2680e0c0SChristopher Ferris #define unlink_small_chunk(M, P, S) {\
3606*2680e0c0SChristopher Ferris   mchunkptr F = P->fd;\
3607*2680e0c0SChristopher Ferris   mchunkptr B = P->bk;\
3608*2680e0c0SChristopher Ferris   bindex_t I = small_index(S);\
3609*2680e0c0SChristopher Ferris   assert(P != B);\
3610*2680e0c0SChristopher Ferris   assert(P != F);\
3611*2680e0c0SChristopher Ferris   assert(chunksize(P) == small_index2size(I));\
3612*2680e0c0SChristopher Ferris   if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3613*2680e0c0SChristopher Ferris     if (B == F) {\
3614*2680e0c0SChristopher Ferris       clear_smallmap(M, I);\
3615*2680e0c0SChristopher Ferris     }\
3616*2680e0c0SChristopher Ferris     else if (RTCHECK(B == smallbin_at(M,I) ||\
3617*2680e0c0SChristopher Ferris                      (ok_address(M, B) && B->fd == P))) {\
3618*2680e0c0SChristopher Ferris       F->bk = B;\
3619*2680e0c0SChristopher Ferris       B->fd = F;\
3620*2680e0c0SChristopher Ferris     }\
3621*2680e0c0SChristopher Ferris     else {\
3622*2680e0c0SChristopher Ferris       CORRUPTION_ERROR_ACTION(M);\
3623*2680e0c0SChristopher Ferris     }\
3624*2680e0c0SChristopher Ferris   }\
3625*2680e0c0SChristopher Ferris   else {\
3626*2680e0c0SChristopher Ferris     CORRUPTION_ERROR_ACTION(M);\
3627*2680e0c0SChristopher Ferris   }\
3628*2680e0c0SChristopher Ferris }
3629*2680e0c0SChristopher Ferris 
3630*2680e0c0SChristopher Ferris /* Unlink the first chunk from a smallbin */
3631*2680e0c0SChristopher Ferris #define unlink_first_small_chunk(M, B, P, I) {\
3632*2680e0c0SChristopher Ferris   mchunkptr F = P->fd;\
3633*2680e0c0SChristopher Ferris   assert(P != B);\
3634*2680e0c0SChristopher Ferris   assert(P != F);\
3635*2680e0c0SChristopher Ferris   assert(chunksize(P) == small_index2size(I));\
3636*2680e0c0SChristopher Ferris   if (B == F) {\
3637*2680e0c0SChristopher Ferris     clear_smallmap(M, I);\
3638*2680e0c0SChristopher Ferris   }\
3639*2680e0c0SChristopher Ferris   else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3640*2680e0c0SChristopher Ferris     F->bk = B;\
3641*2680e0c0SChristopher Ferris     B->fd = F;\
3642*2680e0c0SChristopher Ferris   }\
3643*2680e0c0SChristopher Ferris   else {\
3644*2680e0c0SChristopher Ferris     CORRUPTION_ERROR_ACTION(M);\
3645*2680e0c0SChristopher Ferris   }\
3646*2680e0c0SChristopher Ferris }
3647*2680e0c0SChristopher Ferris 
3648*2680e0c0SChristopher Ferris /* Replace dv node, binning the old one */
3649*2680e0c0SChristopher Ferris /* Used only when dvsize known to be small */
3650*2680e0c0SChristopher Ferris #define replace_dv(M, P, S) {\
3651*2680e0c0SChristopher Ferris   size_t DVS = M->dvsize;\
3652*2680e0c0SChristopher Ferris   assert(is_small(DVS));\
3653*2680e0c0SChristopher Ferris   if (DVS != 0) {\
3654*2680e0c0SChristopher Ferris     mchunkptr DV = M->dv;\
3655*2680e0c0SChristopher Ferris     insert_small_chunk(M, DV, DVS);\
3656*2680e0c0SChristopher Ferris   }\
3657*2680e0c0SChristopher Ferris   M->dvsize = S;\
3658*2680e0c0SChristopher Ferris   M->dv = P;\
3659*2680e0c0SChristopher Ferris }
3660*2680e0c0SChristopher Ferris 
3661*2680e0c0SChristopher Ferris /* ------------------------- Operations on trees ------------------------- */
3662*2680e0c0SChristopher Ferris 
3663*2680e0c0SChristopher Ferris /* Insert chunk into tree */
3664*2680e0c0SChristopher Ferris #define insert_large_chunk(M, X, S) {\
3665*2680e0c0SChristopher Ferris   tbinptr* H;\
3666*2680e0c0SChristopher Ferris   bindex_t I;\
3667*2680e0c0SChristopher Ferris   compute_tree_index(S, I);\
3668*2680e0c0SChristopher Ferris   H = treebin_at(M, I);\
3669*2680e0c0SChristopher Ferris   X->index = I;\
3670*2680e0c0SChristopher Ferris   X->child[0] = X->child[1] = 0;\
3671*2680e0c0SChristopher Ferris   if (!treemap_is_marked(M, I)) {\
3672*2680e0c0SChristopher Ferris     mark_treemap(M, I);\
3673*2680e0c0SChristopher Ferris     *H = X;\
3674*2680e0c0SChristopher Ferris     X->parent = (tchunkptr)H;\
3675*2680e0c0SChristopher Ferris     X->fd = X->bk = X;\
3676*2680e0c0SChristopher Ferris   }\
3677*2680e0c0SChristopher Ferris   else {\
3678*2680e0c0SChristopher Ferris     tchunkptr T = *H;\
3679*2680e0c0SChristopher Ferris     size_t K = S << leftshift_for_tree_index(I);\
3680*2680e0c0SChristopher Ferris     for (;;) {\
3681*2680e0c0SChristopher Ferris       if (chunksize(T) != S) {\
3682*2680e0c0SChristopher Ferris         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3683*2680e0c0SChristopher Ferris         K <<= 1;\
3684*2680e0c0SChristopher Ferris         if (*C != 0)\
3685*2680e0c0SChristopher Ferris           T = *C;\
3686*2680e0c0SChristopher Ferris         else if (RTCHECK(ok_address(M, C))) {\
3687*2680e0c0SChristopher Ferris           *C = X;\
3688*2680e0c0SChristopher Ferris           X->parent = T;\
3689*2680e0c0SChristopher Ferris           X->fd = X->bk = X;\
3690*2680e0c0SChristopher Ferris           break;\
3691*2680e0c0SChristopher Ferris         }\
3692*2680e0c0SChristopher Ferris         else {\
3693*2680e0c0SChristopher Ferris           CORRUPTION_ERROR_ACTION(M);\
3694*2680e0c0SChristopher Ferris           break;\
3695*2680e0c0SChristopher Ferris         }\
3696*2680e0c0SChristopher Ferris       }\
3697*2680e0c0SChristopher Ferris       else {\
3698*2680e0c0SChristopher Ferris         tchunkptr F = T->fd;\
3699*2680e0c0SChristopher Ferris         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3700*2680e0c0SChristopher Ferris           T->fd = F->bk = X;\
3701*2680e0c0SChristopher Ferris           X->fd = F;\
3702*2680e0c0SChristopher Ferris           X->bk = T;\
3703*2680e0c0SChristopher Ferris           X->parent = 0;\
3704*2680e0c0SChristopher Ferris           break;\
3705*2680e0c0SChristopher Ferris         }\
3706*2680e0c0SChristopher Ferris         else {\
3707*2680e0c0SChristopher Ferris           CORRUPTION_ERROR_ACTION(M);\
3708*2680e0c0SChristopher Ferris           break;\
3709*2680e0c0SChristopher Ferris         }\
3710*2680e0c0SChristopher Ferris       }\
3711*2680e0c0SChristopher Ferris     }\
3712*2680e0c0SChristopher Ferris   }\
3713*2680e0c0SChristopher Ferris }
3714*2680e0c0SChristopher Ferris 
3715*2680e0c0SChristopher Ferris /*
3716*2680e0c0SChristopher Ferris   Unlink steps:
3717*2680e0c0SChristopher Ferris 
3718*2680e0c0SChristopher Ferris   1. If x is a chained node, unlink it from its same-sized fd/bk links
3719*2680e0c0SChristopher Ferris      and choose its bk node as its replacement.
3720*2680e0c0SChristopher Ferris   2. If x was the last node of its size, but not a leaf node, it must
3721*2680e0c0SChristopher Ferris      be replaced with a leaf node (not merely one with an open left or
3722*2680e0c0SChristopher Ferris      right), to make sure that lefts and rights of descendents
3723*2680e0c0SChristopher Ferris      correspond properly to bit masks.  We use the rightmost descendent
3724*2680e0c0SChristopher Ferris      of x.  We could use any other leaf, but this is easy to locate and
3725*2680e0c0SChristopher Ferris      tends to counteract removal of leftmosts elsewhere, and so keeps
3726*2680e0c0SChristopher Ferris      paths shorter than minimally guaranteed.  This doesn't loop much
3727*2680e0c0SChristopher Ferris      because on average a node in a tree is near the bottom.
3728*2680e0c0SChristopher Ferris   3. If x is the base of a chain (i.e., has parent links) relink
3729*2680e0c0SChristopher Ferris      x's parent and children to x's replacement (or null if none).
3730*2680e0c0SChristopher Ferris */
3731*2680e0c0SChristopher Ferris 
3732*2680e0c0SChristopher Ferris #define unlink_large_chunk(M, X) {\
3733*2680e0c0SChristopher Ferris   tchunkptr XP = X->parent;\
3734*2680e0c0SChristopher Ferris   tchunkptr R;\
3735*2680e0c0SChristopher Ferris   if (X->bk != X) {\
3736*2680e0c0SChristopher Ferris     tchunkptr F = X->fd;\
3737*2680e0c0SChristopher Ferris     R = X->bk;\
3738*2680e0c0SChristopher Ferris     if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3739*2680e0c0SChristopher Ferris       F->bk = R;\
3740*2680e0c0SChristopher Ferris       R->fd = F;\
3741*2680e0c0SChristopher Ferris     }\
3742*2680e0c0SChristopher Ferris     else {\
3743*2680e0c0SChristopher Ferris       CORRUPTION_ERROR_ACTION(M);\
3744*2680e0c0SChristopher Ferris     }\
3745*2680e0c0SChristopher Ferris   }\
3746*2680e0c0SChristopher Ferris   else {\
3747*2680e0c0SChristopher Ferris     tchunkptr* RP;\
3748*2680e0c0SChristopher Ferris     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3749*2680e0c0SChristopher Ferris         ((R = *(RP = &(X->child[0]))) != 0)) {\
3750*2680e0c0SChristopher Ferris       tchunkptr* CP;\
3751*2680e0c0SChristopher Ferris       while ((*(CP = &(R->child[1])) != 0) ||\
3752*2680e0c0SChristopher Ferris              (*(CP = &(R->child[0])) != 0)) {\
3753*2680e0c0SChristopher Ferris         R = *(RP = CP);\
3754*2680e0c0SChristopher Ferris       }\
3755*2680e0c0SChristopher Ferris       if (RTCHECK(ok_address(M, RP)))\
3756*2680e0c0SChristopher Ferris         *RP = 0;\
3757*2680e0c0SChristopher Ferris       else {\
3758*2680e0c0SChristopher Ferris         CORRUPTION_ERROR_ACTION(M);\
3759*2680e0c0SChristopher Ferris       }\
3760*2680e0c0SChristopher Ferris     }\
3761*2680e0c0SChristopher Ferris   }\
3762*2680e0c0SChristopher Ferris   if (XP != 0) {\
3763*2680e0c0SChristopher Ferris     tbinptr* H = treebin_at(M, X->index);\
3764*2680e0c0SChristopher Ferris     if (X == *H) {\
3765*2680e0c0SChristopher Ferris       if ((*H = R) == 0) \
3766*2680e0c0SChristopher Ferris         clear_treemap(M, X->index);\
3767*2680e0c0SChristopher Ferris     }\
3768*2680e0c0SChristopher Ferris     else if (RTCHECK(ok_address(M, XP))) {\
3769*2680e0c0SChristopher Ferris       if (XP->child[0] == X) \
3770*2680e0c0SChristopher Ferris         XP->child[0] = R;\
3771*2680e0c0SChristopher Ferris       else \
3772*2680e0c0SChristopher Ferris         XP->child[1] = R;\
3773*2680e0c0SChristopher Ferris     }\
3774*2680e0c0SChristopher Ferris     else\
3775*2680e0c0SChristopher Ferris       CORRUPTION_ERROR_ACTION(M);\
3776*2680e0c0SChristopher Ferris     if (R != 0) {\
3777*2680e0c0SChristopher Ferris       if (RTCHECK(ok_address(M, R))) {\
3778*2680e0c0SChristopher Ferris         tchunkptr C0, C1;\
3779*2680e0c0SChristopher Ferris         R->parent = XP;\
3780*2680e0c0SChristopher Ferris         if ((C0 = X->child[0]) != 0) {\
3781*2680e0c0SChristopher Ferris           if (RTCHECK(ok_address(M, C0))) {\
3782*2680e0c0SChristopher Ferris             R->child[0] = C0;\
3783*2680e0c0SChristopher Ferris             C0->parent = R;\
3784*2680e0c0SChristopher Ferris           }\
3785*2680e0c0SChristopher Ferris           else\
3786*2680e0c0SChristopher Ferris             CORRUPTION_ERROR_ACTION(M);\
3787*2680e0c0SChristopher Ferris         }\
3788*2680e0c0SChristopher Ferris         if ((C1 = X->child[1]) != 0) {\
3789*2680e0c0SChristopher Ferris           if (RTCHECK(ok_address(M, C1))) {\
3790*2680e0c0SChristopher Ferris             R->child[1] = C1;\
3791*2680e0c0SChristopher Ferris             C1->parent = R;\
3792*2680e0c0SChristopher Ferris           }\
3793*2680e0c0SChristopher Ferris           else\
3794*2680e0c0SChristopher Ferris             CORRUPTION_ERROR_ACTION(M);\
3795*2680e0c0SChristopher Ferris         }\
3796*2680e0c0SChristopher Ferris       }\
3797*2680e0c0SChristopher Ferris       else\
3798*2680e0c0SChristopher Ferris         CORRUPTION_ERROR_ACTION(M);\
3799*2680e0c0SChristopher Ferris     }\
3800*2680e0c0SChristopher Ferris   }\
3801*2680e0c0SChristopher Ferris }
3802*2680e0c0SChristopher Ferris 
3803*2680e0c0SChristopher Ferris /* Relays to large vs small bin operations */
3804*2680e0c0SChristopher Ferris 
3805*2680e0c0SChristopher Ferris #define insert_chunk(M, P, S)\
3806*2680e0c0SChristopher Ferris   if (is_small(S)) insert_small_chunk(M, P, S)\
3807*2680e0c0SChristopher Ferris   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3808*2680e0c0SChristopher Ferris 
3809*2680e0c0SChristopher Ferris #define unlink_chunk(M, P, S)\
3810*2680e0c0SChristopher Ferris   if (is_small(S)) unlink_small_chunk(M, P, S)\
3811*2680e0c0SChristopher Ferris   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3812*2680e0c0SChristopher Ferris 
3813*2680e0c0SChristopher Ferris 
3814*2680e0c0SChristopher Ferris /* Relays to internal calls to malloc/free from realloc, memalign etc */
3815*2680e0c0SChristopher Ferris 
3816*2680e0c0SChristopher Ferris #if ONLY_MSPACES
3817*2680e0c0SChristopher Ferris #define internal_malloc(m, b) mspace_malloc(m, b)
3818*2680e0c0SChristopher Ferris #define internal_free(m, mem) mspace_free(m,mem);
3819*2680e0c0SChristopher Ferris #else /* ONLY_MSPACES */
3820*2680e0c0SChristopher Ferris #if MSPACES
3821*2680e0c0SChristopher Ferris #define internal_malloc(m, b)\
3822*2680e0c0SChristopher Ferris   ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3823*2680e0c0SChristopher Ferris #define internal_free(m, mem)\
3824*2680e0c0SChristopher Ferris    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3825*2680e0c0SChristopher Ferris #else /* MSPACES */
3826*2680e0c0SChristopher Ferris #define internal_malloc(m, b) dlmalloc(b)
3827*2680e0c0SChristopher Ferris #define internal_free(m, mem) dlfree(mem)
3828*2680e0c0SChristopher Ferris #endif /* MSPACES */
3829*2680e0c0SChristopher Ferris #endif /* ONLY_MSPACES */
3830*2680e0c0SChristopher Ferris 
3831*2680e0c0SChristopher Ferris /* -----------------------  Direct-mmapping chunks ----------------------- */
3832*2680e0c0SChristopher Ferris 
3833*2680e0c0SChristopher Ferris /*
3834*2680e0c0SChristopher Ferris   Directly mmapped chunks are set up with an offset to the start of
3835*2680e0c0SChristopher Ferris   the mmapped region stored in the prev_foot field of the chunk. This
3836*2680e0c0SChristopher Ferris   allows reconstruction of the required argument to MUNMAP when freed,
3837*2680e0c0SChristopher Ferris   and also allows adjustment of the returned chunk to meet alignment
3838*2680e0c0SChristopher Ferris   requirements (especially in memalign).
3839*2680e0c0SChristopher Ferris */
3840*2680e0c0SChristopher Ferris 
3841*2680e0c0SChristopher Ferris /* Malloc using mmap */
mmap_alloc(mstate m,size_t nb)3842*2680e0c0SChristopher Ferris static void* mmap_alloc(mstate m, size_t nb) {
3843*2680e0c0SChristopher Ferris   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3844*2680e0c0SChristopher Ferris   if (m->footprint_limit != 0) {
3845*2680e0c0SChristopher Ferris     size_t fp = m->footprint + mmsize;
3846*2680e0c0SChristopher Ferris     if (fp <= m->footprint || fp > m->footprint_limit)
3847*2680e0c0SChristopher Ferris       return 0;
3848*2680e0c0SChristopher Ferris   }
3849*2680e0c0SChristopher Ferris   if (mmsize > nb) {     /* Check for wrap around 0 */
3850*2680e0c0SChristopher Ferris     char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3851*2680e0c0SChristopher Ferris     if (mm != CMFAIL) {
3852*2680e0c0SChristopher Ferris       size_t offset = align_offset(chunk2mem(mm));
3853*2680e0c0SChristopher Ferris       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3854*2680e0c0SChristopher Ferris       mchunkptr p = (mchunkptr)(mm + offset);
3855*2680e0c0SChristopher Ferris       p->prev_foot = offset;
3856*2680e0c0SChristopher Ferris       p->head = psize;
3857*2680e0c0SChristopher Ferris       mark_inuse_foot(m, p, psize);
3858*2680e0c0SChristopher Ferris       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3859*2680e0c0SChristopher Ferris       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3860*2680e0c0SChristopher Ferris 
3861*2680e0c0SChristopher Ferris       if (m->least_addr == 0 || mm < m->least_addr)
3862*2680e0c0SChristopher Ferris         m->least_addr = mm;
3863*2680e0c0SChristopher Ferris       if ((m->footprint += mmsize) > m->max_footprint)
3864*2680e0c0SChristopher Ferris         m->max_footprint = m->footprint;
3865*2680e0c0SChristopher Ferris       assert(is_aligned(chunk2mem(p)));
3866*2680e0c0SChristopher Ferris       check_mmapped_chunk(m, p);
3867*2680e0c0SChristopher Ferris       return chunk2mem(p);
3868*2680e0c0SChristopher Ferris     }
3869*2680e0c0SChristopher Ferris   }
3870*2680e0c0SChristopher Ferris   return 0;
3871*2680e0c0SChristopher Ferris }
3872*2680e0c0SChristopher Ferris 
3873*2680e0c0SChristopher Ferris /* Realloc using mmap */
mmap_resize(mstate m,mchunkptr oldp,size_t nb,int flags)3874*2680e0c0SChristopher Ferris static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3875*2680e0c0SChristopher Ferris   size_t oldsize = chunksize(oldp);
3876*2680e0c0SChristopher Ferris   (void)flags; /* placate people compiling -Wunused */
3877*2680e0c0SChristopher Ferris   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3878*2680e0c0SChristopher Ferris     return 0;
3879*2680e0c0SChristopher Ferris   /* Keep old chunk if big enough but not too big */
3880*2680e0c0SChristopher Ferris   if (oldsize >= nb + SIZE_T_SIZE &&
3881*2680e0c0SChristopher Ferris       (oldsize - nb) <= (mparams.granularity << 1))
3882*2680e0c0SChristopher Ferris     return oldp;
3883*2680e0c0SChristopher Ferris   else {
3884*2680e0c0SChristopher Ferris     size_t offset = oldp->prev_foot;
3885*2680e0c0SChristopher Ferris     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3886*2680e0c0SChristopher Ferris     size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3887*2680e0c0SChristopher Ferris     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3888*2680e0c0SChristopher Ferris                                   oldmmsize, newmmsize, flags);
3889*2680e0c0SChristopher Ferris     if (cp != CMFAIL) {
3890*2680e0c0SChristopher Ferris       mchunkptr newp = (mchunkptr)(cp + offset);
3891*2680e0c0SChristopher Ferris       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3892*2680e0c0SChristopher Ferris       newp->head = psize;
3893*2680e0c0SChristopher Ferris       mark_inuse_foot(m, newp, psize);
3894*2680e0c0SChristopher Ferris       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3895*2680e0c0SChristopher Ferris       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3896*2680e0c0SChristopher Ferris 
3897*2680e0c0SChristopher Ferris       if (cp < m->least_addr)
3898*2680e0c0SChristopher Ferris         m->least_addr = cp;
3899*2680e0c0SChristopher Ferris       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3900*2680e0c0SChristopher Ferris         m->max_footprint = m->footprint;
3901*2680e0c0SChristopher Ferris       check_mmapped_chunk(m, newp);
3902*2680e0c0SChristopher Ferris       return newp;
3903*2680e0c0SChristopher Ferris     }
3904*2680e0c0SChristopher Ferris   }
3905*2680e0c0SChristopher Ferris   return 0;
3906*2680e0c0SChristopher Ferris }
3907*2680e0c0SChristopher Ferris 
3908*2680e0c0SChristopher Ferris 
3909*2680e0c0SChristopher Ferris /* -------------------------- mspace management -------------------------- */
3910*2680e0c0SChristopher Ferris 
3911*2680e0c0SChristopher Ferris /* Initialize top chunk and its size */
init_top(mstate m,mchunkptr p,size_t psize)3912*2680e0c0SChristopher Ferris static void init_top(mstate m, mchunkptr p, size_t psize) {
3913*2680e0c0SChristopher Ferris   /* Ensure alignment */
3914*2680e0c0SChristopher Ferris   size_t offset = align_offset(chunk2mem(p));
3915*2680e0c0SChristopher Ferris   p = (mchunkptr)((char*)p + offset);
3916*2680e0c0SChristopher Ferris   psize -= offset;
3917*2680e0c0SChristopher Ferris 
3918*2680e0c0SChristopher Ferris   m->top = p;
3919*2680e0c0SChristopher Ferris   m->topsize = psize;
3920*2680e0c0SChristopher Ferris   p->head = psize | PINUSE_BIT;
3921*2680e0c0SChristopher Ferris   /* set size of fake trailing chunk holding overhead space only once */
3922*2680e0c0SChristopher Ferris   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3923*2680e0c0SChristopher Ferris   m->trim_check = mparams.trim_threshold; /* reset on each update */
3924*2680e0c0SChristopher Ferris }
3925*2680e0c0SChristopher Ferris 
3926*2680e0c0SChristopher Ferris /* Initialize bins for a new mstate that is otherwise zeroed out */
init_bins(mstate m)3927*2680e0c0SChristopher Ferris static void init_bins(mstate m) {
3928*2680e0c0SChristopher Ferris   /* Establish circular links for smallbins */
3929*2680e0c0SChristopher Ferris   bindex_t i;
3930*2680e0c0SChristopher Ferris   for (i = 0; i < NSMALLBINS; ++i) {
3931*2680e0c0SChristopher Ferris     sbinptr bin = smallbin_at(m,i);
3932*2680e0c0SChristopher Ferris     bin->fd = bin->bk = bin;
3933*2680e0c0SChristopher Ferris   }
3934*2680e0c0SChristopher Ferris }
3935*2680e0c0SChristopher Ferris 
3936*2680e0c0SChristopher Ferris #if PROCEED_ON_ERROR
3937*2680e0c0SChristopher Ferris 
3938*2680e0c0SChristopher Ferris /* default corruption action */
reset_on_error(mstate m)3939*2680e0c0SChristopher Ferris static void reset_on_error(mstate m) {
3940*2680e0c0SChristopher Ferris   int i;
3941*2680e0c0SChristopher Ferris   ++malloc_corruption_error_count;
3942*2680e0c0SChristopher Ferris   /* Reinitialize fields to forget about all memory */
3943*2680e0c0SChristopher Ferris   m->smallmap = m->treemap = 0;
3944*2680e0c0SChristopher Ferris   m->dvsize = m->topsize = 0;
3945*2680e0c0SChristopher Ferris   m->seg.base = 0;
3946*2680e0c0SChristopher Ferris   m->seg.size = 0;
3947*2680e0c0SChristopher Ferris   m->seg.next = 0;
3948*2680e0c0SChristopher Ferris   m->top = m->dv = 0;
3949*2680e0c0SChristopher Ferris   for (i = 0; i < NTREEBINS; ++i)
3950*2680e0c0SChristopher Ferris     *treebin_at(m, i) = 0;
3951*2680e0c0SChristopher Ferris   init_bins(m);
3952*2680e0c0SChristopher Ferris }
3953*2680e0c0SChristopher Ferris #endif /* PROCEED_ON_ERROR */
3954*2680e0c0SChristopher Ferris 
3955*2680e0c0SChristopher Ferris /* Allocate chunk and prepend remainder with chunk in successor base. */
prepend_alloc(mstate m,char * newbase,char * oldbase,size_t nb)3956*2680e0c0SChristopher Ferris static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3957*2680e0c0SChristopher Ferris                            size_t nb) {
3958*2680e0c0SChristopher Ferris   mchunkptr p = align_as_chunk(newbase);
3959*2680e0c0SChristopher Ferris   mchunkptr oldfirst = align_as_chunk(oldbase);
3960*2680e0c0SChristopher Ferris   size_t psize = (char*)oldfirst - (char*)p;
3961*2680e0c0SChristopher Ferris   mchunkptr q = chunk_plus_offset(p, nb);
3962*2680e0c0SChristopher Ferris   size_t qsize = psize - nb;
3963*2680e0c0SChristopher Ferris   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3964*2680e0c0SChristopher Ferris 
3965*2680e0c0SChristopher Ferris   assert((char*)oldfirst > (char*)q);
3966*2680e0c0SChristopher Ferris   assert(pinuse(oldfirst));
3967*2680e0c0SChristopher Ferris   assert(qsize >= MIN_CHUNK_SIZE);
3968*2680e0c0SChristopher Ferris 
3969*2680e0c0SChristopher Ferris   /* consolidate remainder with first chunk of old base */
3970*2680e0c0SChristopher Ferris   if (oldfirst == m->top) {
3971*2680e0c0SChristopher Ferris     size_t tsize = m->topsize += qsize;
3972*2680e0c0SChristopher Ferris     m->top = q;
3973*2680e0c0SChristopher Ferris     q->head = tsize | PINUSE_BIT;
3974*2680e0c0SChristopher Ferris     check_top_chunk(m, q);
3975*2680e0c0SChristopher Ferris   }
3976*2680e0c0SChristopher Ferris   else if (oldfirst == m->dv) {
3977*2680e0c0SChristopher Ferris     size_t dsize = m->dvsize += qsize;
3978*2680e0c0SChristopher Ferris     m->dv = q;
3979*2680e0c0SChristopher Ferris     set_size_and_pinuse_of_free_chunk(q, dsize);
3980*2680e0c0SChristopher Ferris   }
3981*2680e0c0SChristopher Ferris   else {
3982*2680e0c0SChristopher Ferris     if (!is_inuse(oldfirst)) {
3983*2680e0c0SChristopher Ferris       size_t nsize = chunksize(oldfirst);
3984*2680e0c0SChristopher Ferris       unlink_chunk(m, oldfirst, nsize);
3985*2680e0c0SChristopher Ferris       oldfirst = chunk_plus_offset(oldfirst, nsize);
3986*2680e0c0SChristopher Ferris       qsize += nsize;
3987*2680e0c0SChristopher Ferris     }
3988*2680e0c0SChristopher Ferris     set_free_with_pinuse(q, qsize, oldfirst);
3989*2680e0c0SChristopher Ferris     insert_chunk(m, q, qsize);
3990*2680e0c0SChristopher Ferris     check_free_chunk(m, q);
3991*2680e0c0SChristopher Ferris   }
3992*2680e0c0SChristopher Ferris 
3993*2680e0c0SChristopher Ferris   check_malloced_chunk(m, chunk2mem(p), nb);
3994*2680e0c0SChristopher Ferris   return chunk2mem(p);
3995*2680e0c0SChristopher Ferris }
3996*2680e0c0SChristopher Ferris 
3997*2680e0c0SChristopher Ferris /* Add a segment to hold a new noncontiguous region */
add_segment(mstate m,char * tbase,size_t tsize,flag_t mmapped)3998*2680e0c0SChristopher Ferris static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3999*2680e0c0SChristopher Ferris   /* Determine locations and sizes of segment, fenceposts, old top */
4000*2680e0c0SChristopher Ferris   char* old_top = (char*)m->top;
4001*2680e0c0SChristopher Ferris   msegmentptr oldsp = segment_holding(m, old_top);
4002*2680e0c0SChristopher Ferris   char* old_end = oldsp->base + oldsp->size;
4003*2680e0c0SChristopher Ferris   size_t ssize = pad_request(sizeof(struct malloc_segment));
4004*2680e0c0SChristopher Ferris   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
4005*2680e0c0SChristopher Ferris   size_t offset = align_offset(chunk2mem(rawsp));
4006*2680e0c0SChristopher Ferris   char* asp = rawsp + offset;
4007*2680e0c0SChristopher Ferris   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
4008*2680e0c0SChristopher Ferris   mchunkptr sp = (mchunkptr)csp;
4009*2680e0c0SChristopher Ferris   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4010*2680e0c0SChristopher Ferris   mchunkptr tnext = chunk_plus_offset(sp, ssize);
4011*2680e0c0SChristopher Ferris   mchunkptr p = tnext;
4012*2680e0c0SChristopher Ferris   /* Only used in assert. */
4013*2680e0c0SChristopher Ferris   [[maybe_unused]] int nfences = 0;
4014*2680e0c0SChristopher Ferris 
4015*2680e0c0SChristopher Ferris   /* reset top to new space */
4016*2680e0c0SChristopher Ferris   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4017*2680e0c0SChristopher Ferris 
4018*2680e0c0SChristopher Ferris   /* Set up segment record */
4019*2680e0c0SChristopher Ferris   assert(is_aligned(ss));
4020*2680e0c0SChristopher Ferris   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4021*2680e0c0SChristopher Ferris   *ss = m->seg; /* Push current record */
4022*2680e0c0SChristopher Ferris   m->seg.base = tbase;
4023*2680e0c0SChristopher Ferris   m->seg.size = tsize;
4024*2680e0c0SChristopher Ferris   m->seg.sflags = mmapped;
4025*2680e0c0SChristopher Ferris   m->seg.next = ss;
4026*2680e0c0SChristopher Ferris 
4027*2680e0c0SChristopher Ferris   /* Insert trailing fenceposts */
4028*2680e0c0SChristopher Ferris   for (;;) {
4029*2680e0c0SChristopher Ferris     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4030*2680e0c0SChristopher Ferris     p->head = FENCEPOST_HEAD;
4031*2680e0c0SChristopher Ferris     ++nfences;
4032*2680e0c0SChristopher Ferris     if ((char*)(&(nextp->head)) < old_end)
4033*2680e0c0SChristopher Ferris       p = nextp;
4034*2680e0c0SChristopher Ferris     else
4035*2680e0c0SChristopher Ferris       break;
4036*2680e0c0SChristopher Ferris   }
4037*2680e0c0SChristopher Ferris   assert(nfences >= 2);
4038*2680e0c0SChristopher Ferris 
4039*2680e0c0SChristopher Ferris   /* Insert the rest of old top into a bin as an ordinary free chunk */
4040*2680e0c0SChristopher Ferris   if (csp != old_top) {
4041*2680e0c0SChristopher Ferris     mchunkptr q = (mchunkptr)old_top;
4042*2680e0c0SChristopher Ferris     size_t psize = csp - old_top;
4043*2680e0c0SChristopher Ferris     mchunkptr tn = chunk_plus_offset(q, psize);
4044*2680e0c0SChristopher Ferris     set_free_with_pinuse(q, psize, tn);
4045*2680e0c0SChristopher Ferris     insert_chunk(m, q, psize);
4046*2680e0c0SChristopher Ferris   }
4047*2680e0c0SChristopher Ferris 
4048*2680e0c0SChristopher Ferris   check_top_chunk(m, m->top);
4049*2680e0c0SChristopher Ferris }
4050*2680e0c0SChristopher Ferris 
4051*2680e0c0SChristopher Ferris /* -------------------------- System allocation -------------------------- */
4052*2680e0c0SChristopher Ferris 
4053*2680e0c0SChristopher Ferris /* Get memory from system using MORECORE or MMAP */
sys_alloc(mstate m,size_t nb)4054*2680e0c0SChristopher Ferris static void* sys_alloc(mstate m, size_t nb) {
4055*2680e0c0SChristopher Ferris   char* tbase = CMFAIL;
4056*2680e0c0SChristopher Ferris   size_t tsize = 0;
4057*2680e0c0SChristopher Ferris   flag_t mmap_flag = 0;
4058*2680e0c0SChristopher Ferris   size_t asize; /* allocation size */
4059*2680e0c0SChristopher Ferris 
4060*2680e0c0SChristopher Ferris   ensure_initialization();
4061*2680e0c0SChristopher Ferris 
4062*2680e0c0SChristopher Ferris   /* Directly map large chunks, but only if already initialized */
4063*2680e0c0SChristopher Ferris   if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4064*2680e0c0SChristopher Ferris     void* mem = mmap_alloc(m, nb);
4065*2680e0c0SChristopher Ferris     if (mem != 0)
4066*2680e0c0SChristopher Ferris       return mem;
4067*2680e0c0SChristopher Ferris   }
4068*2680e0c0SChristopher Ferris 
4069*2680e0c0SChristopher Ferris   asize = granularity_align(nb + SYS_ALLOC_PADDING);
4070*2680e0c0SChristopher Ferris   if (asize <= nb) {
4071*2680e0c0SChristopher Ferris     /* BEGIN android-added: set errno */
4072*2680e0c0SChristopher Ferris     MALLOC_FAILURE_ACTION;
4073*2680e0c0SChristopher Ferris     /* END android-added */
4074*2680e0c0SChristopher Ferris     return 0; /* wraparound */
4075*2680e0c0SChristopher Ferris   }
4076*2680e0c0SChristopher Ferris   if (m->footprint_limit != 0) {
4077*2680e0c0SChristopher Ferris     size_t fp = m->footprint + asize;
4078*2680e0c0SChristopher Ferris     if (fp <= m->footprint || fp > m->footprint_limit) {
4079*2680e0c0SChristopher Ferris       /* BEGIN android-added: set errno */
4080*2680e0c0SChristopher Ferris       MALLOC_FAILURE_ACTION;
4081*2680e0c0SChristopher Ferris       /* END android-added */
4082*2680e0c0SChristopher Ferris       return 0;
4083*2680e0c0SChristopher Ferris     }
4084*2680e0c0SChristopher Ferris   }
4085*2680e0c0SChristopher Ferris 
4086*2680e0c0SChristopher Ferris   /*
4087*2680e0c0SChristopher Ferris     Try getting memory in any of three ways (in most-preferred to
4088*2680e0c0SChristopher Ferris     least-preferred order):
4089*2680e0c0SChristopher Ferris     1. A call to MORECORE that can normally contiguously extend memory.
4090*2680e0c0SChristopher Ferris        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4091*2680e0c0SChristopher Ferris        or main space is mmapped or a previous contiguous call failed)
4092*2680e0c0SChristopher Ferris     2. A call to MMAP new space (disabled if not HAVE_MMAP).
4093*2680e0c0SChristopher Ferris        Note that under the default settings, if MORECORE is unable to
4094*2680e0c0SChristopher Ferris        fulfill a request, and HAVE_MMAP is true, then mmap is
4095*2680e0c0SChristopher Ferris        used as a noncontiguous system allocator. This is a useful backup
4096*2680e0c0SChristopher Ferris        strategy for systems with holes in address spaces -- in this case
4097*2680e0c0SChristopher Ferris        sbrk cannot contiguously expand the heap, but mmap may be able to
4098*2680e0c0SChristopher Ferris        find space.
4099*2680e0c0SChristopher Ferris     3. A call to MORECORE that cannot usually contiguously extend memory.
4100*2680e0c0SChristopher Ferris        (disabled if not HAVE_MORECORE)
4101*2680e0c0SChristopher Ferris 
4102*2680e0c0SChristopher Ferris    In all cases, we need to request enough bytes from system to ensure
4103*2680e0c0SChristopher Ferris    we can malloc nb bytes upon success, so pad with enough space for
4104*2680e0c0SChristopher Ferris    top_foot, plus alignment-pad to make sure we don't lose bytes if
4105*2680e0c0SChristopher Ferris    not on boundary, and round this up to a granularity unit.
4106*2680e0c0SChristopher Ferris   */
4107*2680e0c0SChristopher Ferris 
4108*2680e0c0SChristopher Ferris   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4109*2680e0c0SChristopher Ferris     char* br = CMFAIL;
4110*2680e0c0SChristopher Ferris     size_t ssize = asize; /* sbrk call size */
4111*2680e0c0SChristopher Ferris     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4112*2680e0c0SChristopher Ferris     ACQUIRE_MALLOC_GLOBAL_LOCK();
4113*2680e0c0SChristopher Ferris 
4114*2680e0c0SChristopher Ferris     if (ss == 0) {  /* First time through or recovery */
4115*2680e0c0SChristopher Ferris       char* base = (char*)CALL_MORECORE(0);
4116*2680e0c0SChristopher Ferris       if (base != CMFAIL) {
4117*2680e0c0SChristopher Ferris         size_t fp;
4118*2680e0c0SChristopher Ferris         /* Adjust to end on a page boundary */
4119*2680e0c0SChristopher Ferris         if (!is_page_aligned(base))
4120*2680e0c0SChristopher Ferris           ssize += (page_align((size_t)base) - (size_t)base);
4121*2680e0c0SChristopher Ferris         fp = m->footprint + ssize; /* recheck limits */
4122*2680e0c0SChristopher Ferris         if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4123*2680e0c0SChristopher Ferris             (m->footprint_limit == 0 ||
4124*2680e0c0SChristopher Ferris              (fp > m->footprint && fp <= m->footprint_limit)) &&
4125*2680e0c0SChristopher Ferris             (br = (char*)(CALL_MORECORE(ssize))) == base) {
4126*2680e0c0SChristopher Ferris           tbase = base;
4127*2680e0c0SChristopher Ferris           tsize = ssize;
4128*2680e0c0SChristopher Ferris         }
4129*2680e0c0SChristopher Ferris       }
4130*2680e0c0SChristopher Ferris     }
4131*2680e0c0SChristopher Ferris     else {
4132*2680e0c0SChristopher Ferris       /* Subtract out existing available top space from MORECORE request. */
4133*2680e0c0SChristopher Ferris       ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4134*2680e0c0SChristopher Ferris       /* Use mem here only if it did continuously extend old space */
4135*2680e0c0SChristopher Ferris       if (ssize < HALF_MAX_SIZE_T &&
4136*2680e0c0SChristopher Ferris           (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4137*2680e0c0SChristopher Ferris         tbase = br;
4138*2680e0c0SChristopher Ferris         tsize = ssize;
4139*2680e0c0SChristopher Ferris       }
4140*2680e0c0SChristopher Ferris     }
4141*2680e0c0SChristopher Ferris 
4142*2680e0c0SChristopher Ferris     if (tbase == CMFAIL) {    /* Cope with partial failure */
4143*2680e0c0SChristopher Ferris       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
4144*2680e0c0SChristopher Ferris         if (ssize < HALF_MAX_SIZE_T &&
4145*2680e0c0SChristopher Ferris             ssize < nb + SYS_ALLOC_PADDING) {
4146*2680e0c0SChristopher Ferris           size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4147*2680e0c0SChristopher Ferris           if (esize < HALF_MAX_SIZE_T) {
4148*2680e0c0SChristopher Ferris             char* end = (char*)CALL_MORECORE(esize);
4149*2680e0c0SChristopher Ferris             if (end != CMFAIL)
4150*2680e0c0SChristopher Ferris               ssize += esize;
4151*2680e0c0SChristopher Ferris             else {            /* Can't use; try to release */
4152*2680e0c0SChristopher Ferris               (void) CALL_MORECORE(-ssize);
4153*2680e0c0SChristopher Ferris               br = CMFAIL;
4154*2680e0c0SChristopher Ferris             }
4155*2680e0c0SChristopher Ferris           }
4156*2680e0c0SChristopher Ferris         }
4157*2680e0c0SChristopher Ferris       }
4158*2680e0c0SChristopher Ferris       if (br != CMFAIL) {    /* Use the space we did get */
4159*2680e0c0SChristopher Ferris         tbase = br;
4160*2680e0c0SChristopher Ferris         tsize = ssize;
4161*2680e0c0SChristopher Ferris       }
4162*2680e0c0SChristopher Ferris       else
4163*2680e0c0SChristopher Ferris         disable_contiguous(m); /* Don't try contiguous path in the future */
4164*2680e0c0SChristopher Ferris     }
4165*2680e0c0SChristopher Ferris 
4166*2680e0c0SChristopher Ferris     RELEASE_MALLOC_GLOBAL_LOCK();
4167*2680e0c0SChristopher Ferris   }
4168*2680e0c0SChristopher Ferris 
4169*2680e0c0SChristopher Ferris   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
4170*2680e0c0SChristopher Ferris     char* mp = (char*)(CALL_MMAP(asize));
4171*2680e0c0SChristopher Ferris     if (mp != CMFAIL) {
4172*2680e0c0SChristopher Ferris       tbase = mp;
4173*2680e0c0SChristopher Ferris       tsize = asize;
4174*2680e0c0SChristopher Ferris       mmap_flag = USE_MMAP_BIT;
4175*2680e0c0SChristopher Ferris     }
4176*2680e0c0SChristopher Ferris   }
4177*2680e0c0SChristopher Ferris 
4178*2680e0c0SChristopher Ferris   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4179*2680e0c0SChristopher Ferris     if (asize < HALF_MAX_SIZE_T) {
4180*2680e0c0SChristopher Ferris       char* br = CMFAIL;
4181*2680e0c0SChristopher Ferris       char* end = CMFAIL;
4182*2680e0c0SChristopher Ferris       ACQUIRE_MALLOC_GLOBAL_LOCK();
4183*2680e0c0SChristopher Ferris       br = (char*)(CALL_MORECORE(asize));
4184*2680e0c0SChristopher Ferris       end = (char*)(CALL_MORECORE(0));
4185*2680e0c0SChristopher Ferris       RELEASE_MALLOC_GLOBAL_LOCK();
4186*2680e0c0SChristopher Ferris       if (br != CMFAIL && end != CMFAIL && br < end) {
4187*2680e0c0SChristopher Ferris         size_t ssize = end - br;
4188*2680e0c0SChristopher Ferris         if (ssize > nb + TOP_FOOT_SIZE) {
4189*2680e0c0SChristopher Ferris           tbase = br;
4190*2680e0c0SChristopher Ferris           tsize = ssize;
4191*2680e0c0SChristopher Ferris         }
4192*2680e0c0SChristopher Ferris       }
4193*2680e0c0SChristopher Ferris     }
4194*2680e0c0SChristopher Ferris   }
4195*2680e0c0SChristopher Ferris 
4196*2680e0c0SChristopher Ferris   if (tbase != CMFAIL) {
4197*2680e0c0SChristopher Ferris 
4198*2680e0c0SChristopher Ferris     if ((m->footprint += tsize) > m->max_footprint)
4199*2680e0c0SChristopher Ferris       m->max_footprint = m->footprint;
4200*2680e0c0SChristopher Ferris 
4201*2680e0c0SChristopher Ferris     if (!is_initialized(m)) { /* first-time initialization */
4202*2680e0c0SChristopher Ferris       if (m->least_addr == 0 || tbase < m->least_addr)
4203*2680e0c0SChristopher Ferris         m->least_addr = tbase;
4204*2680e0c0SChristopher Ferris       m->seg.base = tbase;
4205*2680e0c0SChristopher Ferris       m->seg.size = tsize;
4206*2680e0c0SChristopher Ferris       m->seg.sflags = mmap_flag;
4207*2680e0c0SChristopher Ferris       m->magic = mparams.magic;
4208*2680e0c0SChristopher Ferris       m->release_checks = MAX_RELEASE_CHECK_RATE;
4209*2680e0c0SChristopher Ferris       init_bins(m);
4210*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
4211*2680e0c0SChristopher Ferris       if (is_global(m))
4212*2680e0c0SChristopher Ferris         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4213*2680e0c0SChristopher Ferris       else
4214*2680e0c0SChristopher Ferris #endif
4215*2680e0c0SChristopher Ferris       {
4216*2680e0c0SChristopher Ferris         /* Offset top by embedded malloc_state */
4217*2680e0c0SChristopher Ferris         mchunkptr mn = next_chunk(mem2chunk(m));
4218*2680e0c0SChristopher Ferris         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4219*2680e0c0SChristopher Ferris       }
4220*2680e0c0SChristopher Ferris     }
4221*2680e0c0SChristopher Ferris 
4222*2680e0c0SChristopher Ferris     else {
4223*2680e0c0SChristopher Ferris       /* Try to merge with an existing segment */
4224*2680e0c0SChristopher Ferris       msegmentptr sp = &m->seg;
4225*2680e0c0SChristopher Ferris       /* Only consider most recent segment if traversal suppressed */
4226*2680e0c0SChristopher Ferris       while (sp != 0 && tbase != sp->base + sp->size)
4227*2680e0c0SChristopher Ferris         sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4228*2680e0c0SChristopher Ferris       if (sp != 0 &&
4229*2680e0c0SChristopher Ferris           !is_extern_segment(sp) &&
4230*2680e0c0SChristopher Ferris           (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4231*2680e0c0SChristopher Ferris           segment_holds(sp, m->top)) { /* append */
4232*2680e0c0SChristopher Ferris         sp->size += tsize;
4233*2680e0c0SChristopher Ferris         init_top(m, m->top, m->topsize + tsize);
4234*2680e0c0SChristopher Ferris       }
4235*2680e0c0SChristopher Ferris       else {
4236*2680e0c0SChristopher Ferris         if (tbase < m->least_addr)
4237*2680e0c0SChristopher Ferris           m->least_addr = tbase;
4238*2680e0c0SChristopher Ferris         sp = &m->seg;
4239*2680e0c0SChristopher Ferris         while (sp != 0 && sp->base != tbase + tsize)
4240*2680e0c0SChristopher Ferris           sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4241*2680e0c0SChristopher Ferris         if (sp != 0 &&
4242*2680e0c0SChristopher Ferris             !is_extern_segment(sp) &&
4243*2680e0c0SChristopher Ferris             (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4244*2680e0c0SChristopher Ferris           char* oldbase = sp->base;
4245*2680e0c0SChristopher Ferris           sp->base = tbase;
4246*2680e0c0SChristopher Ferris           sp->size += tsize;
4247*2680e0c0SChristopher Ferris           return prepend_alloc(m, tbase, oldbase, nb);
4248*2680e0c0SChristopher Ferris         }
4249*2680e0c0SChristopher Ferris         else
4250*2680e0c0SChristopher Ferris           add_segment(m, tbase, tsize, mmap_flag);
4251*2680e0c0SChristopher Ferris       }
4252*2680e0c0SChristopher Ferris     }
4253*2680e0c0SChristopher Ferris 
4254*2680e0c0SChristopher Ferris     if (nb < m->topsize) { /* Allocate from new or extended top space */
4255*2680e0c0SChristopher Ferris       size_t rsize = m->topsize -= nb;
4256*2680e0c0SChristopher Ferris       mchunkptr p = m->top;
4257*2680e0c0SChristopher Ferris       mchunkptr r = m->top = chunk_plus_offset(p, nb);
4258*2680e0c0SChristopher Ferris       r->head = rsize | PINUSE_BIT;
4259*2680e0c0SChristopher Ferris       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4260*2680e0c0SChristopher Ferris       check_top_chunk(m, m->top);
4261*2680e0c0SChristopher Ferris       check_malloced_chunk(m, chunk2mem(p), nb);
4262*2680e0c0SChristopher Ferris       return chunk2mem(p);
4263*2680e0c0SChristopher Ferris     }
4264*2680e0c0SChristopher Ferris   }
4265*2680e0c0SChristopher Ferris 
4266*2680e0c0SChristopher Ferris   MALLOC_FAILURE_ACTION;
4267*2680e0c0SChristopher Ferris   return 0;
4268*2680e0c0SChristopher Ferris }
4269*2680e0c0SChristopher Ferris 
4270*2680e0c0SChristopher Ferris /* -----------------------  system deallocation -------------------------- */
4271*2680e0c0SChristopher Ferris 
4272*2680e0c0SChristopher Ferris /* Unmap and unlink any mmapped segments that don't contain used chunks */
release_unused_segments(mstate m)4273*2680e0c0SChristopher Ferris static size_t release_unused_segments(mstate m) {
4274*2680e0c0SChristopher Ferris   size_t released = 0;
4275*2680e0c0SChristopher Ferris   int nsegs = 0;
4276*2680e0c0SChristopher Ferris   msegmentptr pred = &m->seg;
4277*2680e0c0SChristopher Ferris   msegmentptr sp = pred->next;
4278*2680e0c0SChristopher Ferris   while (sp != 0) {
4279*2680e0c0SChristopher Ferris     char* base = sp->base;
4280*2680e0c0SChristopher Ferris     size_t size = sp->size;
4281*2680e0c0SChristopher Ferris     msegmentptr next = sp->next;
4282*2680e0c0SChristopher Ferris     ++nsegs;
4283*2680e0c0SChristopher Ferris     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4284*2680e0c0SChristopher Ferris       mchunkptr p = align_as_chunk(base);
4285*2680e0c0SChristopher Ferris       size_t psize = chunksize(p);
4286*2680e0c0SChristopher Ferris       /* Can unmap if first chunk holds entire segment and not pinned */
4287*2680e0c0SChristopher Ferris       if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4288*2680e0c0SChristopher Ferris         tchunkptr tp = (tchunkptr)p;
4289*2680e0c0SChristopher Ferris         assert(segment_holds(sp, (char*)sp));
4290*2680e0c0SChristopher Ferris         if (p == m->dv) {
4291*2680e0c0SChristopher Ferris           m->dv = 0;
4292*2680e0c0SChristopher Ferris           m->dvsize = 0;
4293*2680e0c0SChristopher Ferris         }
4294*2680e0c0SChristopher Ferris         else {
4295*2680e0c0SChristopher Ferris           unlink_large_chunk(m, tp);
4296*2680e0c0SChristopher Ferris         }
4297*2680e0c0SChristopher Ferris         if (CALL_MUNMAP(base, size) == 0) {
4298*2680e0c0SChristopher Ferris           released += size;
4299*2680e0c0SChristopher Ferris           m->footprint -= size;
4300*2680e0c0SChristopher Ferris           /* unlink obsoleted record */
4301*2680e0c0SChristopher Ferris           sp = pred;
4302*2680e0c0SChristopher Ferris           sp->next = next;
4303*2680e0c0SChristopher Ferris         }
4304*2680e0c0SChristopher Ferris         else { /* back out if cannot unmap */
4305*2680e0c0SChristopher Ferris           insert_large_chunk(m, tp, psize);
4306*2680e0c0SChristopher Ferris         }
4307*2680e0c0SChristopher Ferris       }
4308*2680e0c0SChristopher Ferris     }
4309*2680e0c0SChristopher Ferris     if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4310*2680e0c0SChristopher Ferris       break;
4311*2680e0c0SChristopher Ferris     pred = sp;
4312*2680e0c0SChristopher Ferris     sp = next;
4313*2680e0c0SChristopher Ferris   }
4314*2680e0c0SChristopher Ferris   /* Reset check counter */
4315*2680e0c0SChristopher Ferris   m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4316*2680e0c0SChristopher Ferris                        (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4317*2680e0c0SChristopher Ferris   return released;
4318*2680e0c0SChristopher Ferris }
4319*2680e0c0SChristopher Ferris 
sys_trim(mstate m,size_t pad)4320*2680e0c0SChristopher Ferris static int sys_trim(mstate m, size_t pad) {
4321*2680e0c0SChristopher Ferris   size_t released = 0;
4322*2680e0c0SChristopher Ferris   ensure_initialization();
4323*2680e0c0SChristopher Ferris   if (pad < MAX_REQUEST && is_initialized(m)) {
4324*2680e0c0SChristopher Ferris     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4325*2680e0c0SChristopher Ferris 
4326*2680e0c0SChristopher Ferris     if (m->topsize > pad) {
4327*2680e0c0SChristopher Ferris       /* Shrink top space in granularity-size units, keeping at least one */
4328*2680e0c0SChristopher Ferris       size_t unit = mparams.granularity;
4329*2680e0c0SChristopher Ferris       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4330*2680e0c0SChristopher Ferris                       SIZE_T_ONE) * unit;
4331*2680e0c0SChristopher Ferris       msegmentptr sp = segment_holding(m, (char*)m->top);
4332*2680e0c0SChristopher Ferris 
4333*2680e0c0SChristopher Ferris       if (!is_extern_segment(sp)) {
4334*2680e0c0SChristopher Ferris         if (is_mmapped_segment(sp)) {
4335*2680e0c0SChristopher Ferris           if (HAVE_MMAP &&
4336*2680e0c0SChristopher Ferris               sp->size >= extra &&
4337*2680e0c0SChristopher Ferris               !has_segment_link(m, sp)) { /* can't shrink if pinned */
4338*2680e0c0SChristopher Ferris             size_t newsize = sp->size - extra;
4339*2680e0c0SChristopher Ferris             (void)newsize; /* placate people compiling -Wunused-variable */
4340*2680e0c0SChristopher Ferris             /* Prefer mremap, fall back to munmap */
4341*2680e0c0SChristopher Ferris             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4342*2680e0c0SChristopher Ferris                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4343*2680e0c0SChristopher Ferris               released = extra;
4344*2680e0c0SChristopher Ferris             }
4345*2680e0c0SChristopher Ferris           }
4346*2680e0c0SChristopher Ferris         }
4347*2680e0c0SChristopher Ferris         else if (HAVE_MORECORE) {
4348*2680e0c0SChristopher Ferris           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4349*2680e0c0SChristopher Ferris             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4350*2680e0c0SChristopher Ferris           ACQUIRE_MALLOC_GLOBAL_LOCK();
4351*2680e0c0SChristopher Ferris           {
4352*2680e0c0SChristopher Ferris             /* Make sure end of memory is where we last set it. */
4353*2680e0c0SChristopher Ferris             char* old_br = (char*)(CALL_MORECORE(0));
4354*2680e0c0SChristopher Ferris             if (old_br == sp->base + sp->size) {
4355*2680e0c0SChristopher Ferris               char* rel_br = (char*)(CALL_MORECORE(-extra));
4356*2680e0c0SChristopher Ferris               char* new_br = (char*)(CALL_MORECORE(0));
4357*2680e0c0SChristopher Ferris               if (rel_br != CMFAIL && new_br < old_br)
4358*2680e0c0SChristopher Ferris                 released = old_br - new_br;
4359*2680e0c0SChristopher Ferris             }
4360*2680e0c0SChristopher Ferris           }
4361*2680e0c0SChristopher Ferris           RELEASE_MALLOC_GLOBAL_LOCK();
4362*2680e0c0SChristopher Ferris         }
4363*2680e0c0SChristopher Ferris       }
4364*2680e0c0SChristopher Ferris 
4365*2680e0c0SChristopher Ferris       if (released != 0) {
4366*2680e0c0SChristopher Ferris         sp->size -= released;
4367*2680e0c0SChristopher Ferris         m->footprint -= released;
4368*2680e0c0SChristopher Ferris         init_top(m, m->top, m->topsize - released);
4369*2680e0c0SChristopher Ferris         check_top_chunk(m, m->top);
4370*2680e0c0SChristopher Ferris       }
4371*2680e0c0SChristopher Ferris     }
4372*2680e0c0SChristopher Ferris 
4373*2680e0c0SChristopher Ferris     /* Unmap any unused mmapped segments */
4374*2680e0c0SChristopher Ferris     if (HAVE_MMAP)
4375*2680e0c0SChristopher Ferris       released += release_unused_segments(m);
4376*2680e0c0SChristopher Ferris 
4377*2680e0c0SChristopher Ferris     /* On failure, disable autotrim to avoid repeated failed future calls */
4378*2680e0c0SChristopher Ferris     if (released == 0 && m->topsize > m->trim_check)
4379*2680e0c0SChristopher Ferris       m->trim_check = MAX_SIZE_T;
4380*2680e0c0SChristopher Ferris   }
4381*2680e0c0SChristopher Ferris 
4382*2680e0c0SChristopher Ferris   return (released != 0)? 1 : 0;
4383*2680e0c0SChristopher Ferris }
4384*2680e0c0SChristopher Ferris 
4385*2680e0c0SChristopher Ferris /* Consolidate and bin a chunk. Differs from exported versions
4386*2680e0c0SChristopher Ferris    of free mainly in that the chunk need not be marked as inuse.
4387*2680e0c0SChristopher Ferris */
dispose_chunk(mstate m,mchunkptr p,size_t psize)4388*2680e0c0SChristopher Ferris static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4389*2680e0c0SChristopher Ferris   mchunkptr next = chunk_plus_offset(p, psize);
4390*2680e0c0SChristopher Ferris   if (!pinuse(p)) {
4391*2680e0c0SChristopher Ferris     mchunkptr prev;
4392*2680e0c0SChristopher Ferris     size_t prevsize = p->prev_foot;
4393*2680e0c0SChristopher Ferris     if (is_mmapped(p)) {
4394*2680e0c0SChristopher Ferris       psize += prevsize + MMAP_FOOT_PAD;
4395*2680e0c0SChristopher Ferris       if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4396*2680e0c0SChristopher Ferris         m->footprint -= psize;
4397*2680e0c0SChristopher Ferris       return;
4398*2680e0c0SChristopher Ferris     }
4399*2680e0c0SChristopher Ferris     prev = chunk_minus_offset(p, prevsize);
4400*2680e0c0SChristopher Ferris     psize += prevsize;
4401*2680e0c0SChristopher Ferris     p = prev;
4402*2680e0c0SChristopher Ferris     if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4403*2680e0c0SChristopher Ferris       if (p != m->dv) {
4404*2680e0c0SChristopher Ferris         unlink_chunk(m, p, prevsize);
4405*2680e0c0SChristopher Ferris       }
4406*2680e0c0SChristopher Ferris       else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4407*2680e0c0SChristopher Ferris         m->dvsize = psize;
4408*2680e0c0SChristopher Ferris         set_free_with_pinuse(p, psize, next);
4409*2680e0c0SChristopher Ferris         return;
4410*2680e0c0SChristopher Ferris       }
4411*2680e0c0SChristopher Ferris     }
4412*2680e0c0SChristopher Ferris     else {
4413*2680e0c0SChristopher Ferris       CORRUPTION_ERROR_ACTION(m);
4414*2680e0c0SChristopher Ferris       return;
4415*2680e0c0SChristopher Ferris     }
4416*2680e0c0SChristopher Ferris   }
4417*2680e0c0SChristopher Ferris   if (RTCHECK(ok_address(m, next))) {
4418*2680e0c0SChristopher Ferris     if (!cinuse(next)) {  /* consolidate forward */
4419*2680e0c0SChristopher Ferris       if (next == m->top) {
4420*2680e0c0SChristopher Ferris         size_t tsize = m->topsize += psize;
4421*2680e0c0SChristopher Ferris         m->top = p;
4422*2680e0c0SChristopher Ferris         p->head = tsize | PINUSE_BIT;
4423*2680e0c0SChristopher Ferris         if (p == m->dv) {
4424*2680e0c0SChristopher Ferris           m->dv = 0;
4425*2680e0c0SChristopher Ferris           m->dvsize = 0;
4426*2680e0c0SChristopher Ferris         }
4427*2680e0c0SChristopher Ferris         return;
4428*2680e0c0SChristopher Ferris       }
4429*2680e0c0SChristopher Ferris       else if (next == m->dv) {
4430*2680e0c0SChristopher Ferris         size_t dsize = m->dvsize += psize;
4431*2680e0c0SChristopher Ferris         m->dv = p;
4432*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_free_chunk(p, dsize);
4433*2680e0c0SChristopher Ferris         return;
4434*2680e0c0SChristopher Ferris       }
4435*2680e0c0SChristopher Ferris       else {
4436*2680e0c0SChristopher Ferris         size_t nsize = chunksize(next);
4437*2680e0c0SChristopher Ferris         psize += nsize;
4438*2680e0c0SChristopher Ferris         unlink_chunk(m, next, nsize);
4439*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_free_chunk(p, psize);
4440*2680e0c0SChristopher Ferris         if (p == m->dv) {
4441*2680e0c0SChristopher Ferris           m->dvsize = psize;
4442*2680e0c0SChristopher Ferris           return;
4443*2680e0c0SChristopher Ferris         }
4444*2680e0c0SChristopher Ferris       }
4445*2680e0c0SChristopher Ferris     }
4446*2680e0c0SChristopher Ferris     else {
4447*2680e0c0SChristopher Ferris       set_free_with_pinuse(p, psize, next);
4448*2680e0c0SChristopher Ferris     }
4449*2680e0c0SChristopher Ferris     insert_chunk(m, p, psize);
4450*2680e0c0SChristopher Ferris   }
4451*2680e0c0SChristopher Ferris   else {
4452*2680e0c0SChristopher Ferris     CORRUPTION_ERROR_ACTION(m);
4453*2680e0c0SChristopher Ferris   }
4454*2680e0c0SChristopher Ferris }
4455*2680e0c0SChristopher Ferris 
4456*2680e0c0SChristopher Ferris /* ---------------------------- malloc --------------------------- */
4457*2680e0c0SChristopher Ferris 
4458*2680e0c0SChristopher Ferris /* allocate a large request from the best fitting chunk in a treebin */
tmalloc_large(mstate m,size_t nb)4459*2680e0c0SChristopher Ferris static void* tmalloc_large(mstate m, size_t nb) {
4460*2680e0c0SChristopher Ferris   tchunkptr v = 0;
4461*2680e0c0SChristopher Ferris   size_t rsize = -nb; /* Unsigned negation */
4462*2680e0c0SChristopher Ferris   tchunkptr t;
4463*2680e0c0SChristopher Ferris   bindex_t idx;
4464*2680e0c0SChristopher Ferris   compute_tree_index(nb, idx);
4465*2680e0c0SChristopher Ferris   if ((t = *treebin_at(m, idx)) != 0) {
4466*2680e0c0SChristopher Ferris     /* Traverse tree for this bin looking for node with size == nb */
4467*2680e0c0SChristopher Ferris     size_t sizebits = nb << leftshift_for_tree_index(idx);
4468*2680e0c0SChristopher Ferris     tchunkptr rst = 0;  /* The deepest untaken right subtree */
4469*2680e0c0SChristopher Ferris     for (;;) {
4470*2680e0c0SChristopher Ferris       tchunkptr rt;
4471*2680e0c0SChristopher Ferris       size_t trem = chunksize(t) - nb;
4472*2680e0c0SChristopher Ferris       if (trem < rsize) {
4473*2680e0c0SChristopher Ferris         v = t;
4474*2680e0c0SChristopher Ferris         if ((rsize = trem) == 0)
4475*2680e0c0SChristopher Ferris           break;
4476*2680e0c0SChristopher Ferris       }
4477*2680e0c0SChristopher Ferris       rt = t->child[1];
4478*2680e0c0SChristopher Ferris       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4479*2680e0c0SChristopher Ferris       if (rt != 0 && rt != t)
4480*2680e0c0SChristopher Ferris         rst = rt;
4481*2680e0c0SChristopher Ferris       if (t == 0) {
4482*2680e0c0SChristopher Ferris         t = rst; /* set t to least subtree holding sizes > nb */
4483*2680e0c0SChristopher Ferris         break;
4484*2680e0c0SChristopher Ferris       }
4485*2680e0c0SChristopher Ferris       sizebits <<= 1;
4486*2680e0c0SChristopher Ferris     }
4487*2680e0c0SChristopher Ferris   }
4488*2680e0c0SChristopher Ferris   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4489*2680e0c0SChristopher Ferris     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4490*2680e0c0SChristopher Ferris     if (leftbits != 0) {
4491*2680e0c0SChristopher Ferris       bindex_t i;
4492*2680e0c0SChristopher Ferris       binmap_t leastbit = least_bit(leftbits);
4493*2680e0c0SChristopher Ferris       compute_bit2idx(leastbit, i);
4494*2680e0c0SChristopher Ferris       t = *treebin_at(m, i);
4495*2680e0c0SChristopher Ferris     }
4496*2680e0c0SChristopher Ferris   }
4497*2680e0c0SChristopher Ferris 
4498*2680e0c0SChristopher Ferris   while (t != 0) { /* find smallest of tree or subtree */
4499*2680e0c0SChristopher Ferris     size_t trem = chunksize(t) - nb;
4500*2680e0c0SChristopher Ferris     if (trem < rsize) {
4501*2680e0c0SChristopher Ferris       rsize = trem;
4502*2680e0c0SChristopher Ferris       v = t;
4503*2680e0c0SChristopher Ferris     }
4504*2680e0c0SChristopher Ferris     t = leftmost_child(t);
4505*2680e0c0SChristopher Ferris   }
4506*2680e0c0SChristopher Ferris 
4507*2680e0c0SChristopher Ferris   /*  If dv is a better fit, return 0 so malloc will use it */
4508*2680e0c0SChristopher Ferris   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4509*2680e0c0SChristopher Ferris     if (RTCHECK(ok_address(m, v))) { /* split */
4510*2680e0c0SChristopher Ferris       mchunkptr r = chunk_plus_offset(v, nb);
4511*2680e0c0SChristopher Ferris       assert(chunksize(v) == rsize + nb);
4512*2680e0c0SChristopher Ferris       if (RTCHECK(ok_next(v, r))) {
4513*2680e0c0SChristopher Ferris         unlink_large_chunk(m, v);
4514*2680e0c0SChristopher Ferris         if (rsize < MIN_CHUNK_SIZE)
4515*2680e0c0SChristopher Ferris           set_inuse_and_pinuse(m, v, (rsize + nb));
4516*2680e0c0SChristopher Ferris         else {
4517*2680e0c0SChristopher Ferris           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4518*2680e0c0SChristopher Ferris           set_size_and_pinuse_of_free_chunk(r, rsize);
4519*2680e0c0SChristopher Ferris           insert_chunk(m, r, rsize);
4520*2680e0c0SChristopher Ferris         }
4521*2680e0c0SChristopher Ferris         return chunk2mem(v);
4522*2680e0c0SChristopher Ferris       }
4523*2680e0c0SChristopher Ferris     }
4524*2680e0c0SChristopher Ferris     CORRUPTION_ERROR_ACTION(m);
4525*2680e0c0SChristopher Ferris   }
4526*2680e0c0SChristopher Ferris   return 0;
4527*2680e0c0SChristopher Ferris }
4528*2680e0c0SChristopher Ferris 
4529*2680e0c0SChristopher Ferris /* allocate a small request from the best fitting chunk in a treebin */
tmalloc_small(mstate m,size_t nb)4530*2680e0c0SChristopher Ferris static void* tmalloc_small(mstate m, size_t nb) {
4531*2680e0c0SChristopher Ferris   tchunkptr t, v;
4532*2680e0c0SChristopher Ferris   size_t rsize;
4533*2680e0c0SChristopher Ferris   bindex_t i;
4534*2680e0c0SChristopher Ferris   binmap_t leastbit = least_bit(m->treemap);
4535*2680e0c0SChristopher Ferris   compute_bit2idx(leastbit, i);
4536*2680e0c0SChristopher Ferris   v = t = *treebin_at(m, i);
4537*2680e0c0SChristopher Ferris   rsize = chunksize(t) - nb;
4538*2680e0c0SChristopher Ferris 
4539*2680e0c0SChristopher Ferris   while ((t = leftmost_child(t)) != 0) {
4540*2680e0c0SChristopher Ferris     size_t trem = chunksize(t) - nb;
4541*2680e0c0SChristopher Ferris     if (trem < rsize) {
4542*2680e0c0SChristopher Ferris       rsize = trem;
4543*2680e0c0SChristopher Ferris       v = t;
4544*2680e0c0SChristopher Ferris     }
4545*2680e0c0SChristopher Ferris   }
4546*2680e0c0SChristopher Ferris 
4547*2680e0c0SChristopher Ferris   if (RTCHECK(ok_address(m, v))) {
4548*2680e0c0SChristopher Ferris     mchunkptr r = chunk_plus_offset(v, nb);
4549*2680e0c0SChristopher Ferris     assert(chunksize(v) == rsize + nb);
4550*2680e0c0SChristopher Ferris     if (RTCHECK(ok_next(v, r))) {
4551*2680e0c0SChristopher Ferris       unlink_large_chunk(m, v);
4552*2680e0c0SChristopher Ferris       if (rsize < MIN_CHUNK_SIZE)
4553*2680e0c0SChristopher Ferris         set_inuse_and_pinuse(m, v, (rsize + nb));
4554*2680e0c0SChristopher Ferris       else {
4555*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4556*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_free_chunk(r, rsize);
4557*2680e0c0SChristopher Ferris         replace_dv(m, r, rsize);
4558*2680e0c0SChristopher Ferris       }
4559*2680e0c0SChristopher Ferris       return chunk2mem(v);
4560*2680e0c0SChristopher Ferris     }
4561*2680e0c0SChristopher Ferris   }
4562*2680e0c0SChristopher Ferris 
4563*2680e0c0SChristopher Ferris   CORRUPTION_ERROR_ACTION(m);
4564*2680e0c0SChristopher Ferris   return 0;
4565*2680e0c0SChristopher Ferris }
4566*2680e0c0SChristopher Ferris 
4567*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
4568*2680e0c0SChristopher Ferris 
dlmalloc(size_t bytes)4569*2680e0c0SChristopher Ferris void* dlmalloc(size_t bytes) {
4570*2680e0c0SChristopher Ferris   /*
4571*2680e0c0SChristopher Ferris      Basic algorithm:
4572*2680e0c0SChristopher Ferris      If a small request (< 256 bytes minus per-chunk overhead):
4573*2680e0c0SChristopher Ferris        1. If one exists, use a remainderless chunk in associated smallbin.
4574*2680e0c0SChristopher Ferris           (Remainderless means that there are too few excess bytes to
4575*2680e0c0SChristopher Ferris           represent as a chunk.)
4576*2680e0c0SChristopher Ferris        2. If it is big enough, use the dv chunk, which is normally the
4577*2680e0c0SChristopher Ferris           chunk adjacent to the one used for the most recent small request.
4578*2680e0c0SChristopher Ferris        3. If one exists, split the smallest available chunk in a bin,
4579*2680e0c0SChristopher Ferris           saving remainder in dv.
4580*2680e0c0SChristopher Ferris        4. If it is big enough, use the top chunk.
4581*2680e0c0SChristopher Ferris        5. If available, get memory from system and use it
4582*2680e0c0SChristopher Ferris      Otherwise, for a large request:
4583*2680e0c0SChristopher Ferris        1. Find the smallest available binned chunk that fits, and use it
4584*2680e0c0SChristopher Ferris           if it is better fitting than dv chunk, splitting if necessary.
4585*2680e0c0SChristopher Ferris        2. If better fitting than any binned chunk, use the dv chunk.
4586*2680e0c0SChristopher Ferris        3. If it is big enough, use the top chunk.
4587*2680e0c0SChristopher Ferris        4. If request size >= mmap threshold, try to directly mmap this chunk.
4588*2680e0c0SChristopher Ferris        5. If available, get memory from system and use it
4589*2680e0c0SChristopher Ferris 
4590*2680e0c0SChristopher Ferris      The ugly goto's here ensure that postaction occurs along all paths.
4591*2680e0c0SChristopher Ferris   */
4592*2680e0c0SChristopher Ferris 
4593*2680e0c0SChristopher Ferris #if USE_LOCKS
4594*2680e0c0SChristopher Ferris   ensure_initialization(); /* initialize in sys_alloc if not using locks */
4595*2680e0c0SChristopher Ferris #endif
4596*2680e0c0SChristopher Ferris 
4597*2680e0c0SChristopher Ferris   if (!PREACTION(gm)) {
4598*2680e0c0SChristopher Ferris     void* mem;
4599*2680e0c0SChristopher Ferris     size_t nb;
4600*2680e0c0SChristopher Ferris     if (bytes <= MAX_SMALL_REQUEST) {
4601*2680e0c0SChristopher Ferris       bindex_t idx;
4602*2680e0c0SChristopher Ferris       binmap_t smallbits;
4603*2680e0c0SChristopher Ferris       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4604*2680e0c0SChristopher Ferris       idx = small_index(nb);
4605*2680e0c0SChristopher Ferris       smallbits = gm->smallmap >> idx;
4606*2680e0c0SChristopher Ferris 
4607*2680e0c0SChristopher Ferris       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4608*2680e0c0SChristopher Ferris         mchunkptr b, p;
4609*2680e0c0SChristopher Ferris         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4610*2680e0c0SChristopher Ferris         b = smallbin_at(gm, idx);
4611*2680e0c0SChristopher Ferris         p = b->fd;
4612*2680e0c0SChristopher Ferris         assert(chunksize(p) == small_index2size(idx));
4613*2680e0c0SChristopher Ferris         unlink_first_small_chunk(gm, b, p, idx);
4614*2680e0c0SChristopher Ferris         set_inuse_and_pinuse(gm, p, small_index2size(idx));
4615*2680e0c0SChristopher Ferris         mem = chunk2mem(p);
4616*2680e0c0SChristopher Ferris         check_malloced_chunk(gm, mem, nb);
4617*2680e0c0SChristopher Ferris         goto postaction;
4618*2680e0c0SChristopher Ferris       }
4619*2680e0c0SChristopher Ferris 
4620*2680e0c0SChristopher Ferris       else if (nb > gm->dvsize) {
4621*2680e0c0SChristopher Ferris         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4622*2680e0c0SChristopher Ferris           mchunkptr b, p, r;
4623*2680e0c0SChristopher Ferris           size_t rsize;
4624*2680e0c0SChristopher Ferris           bindex_t i;
4625*2680e0c0SChristopher Ferris           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4626*2680e0c0SChristopher Ferris           binmap_t leastbit = least_bit(leftbits);
4627*2680e0c0SChristopher Ferris           compute_bit2idx(leastbit, i);
4628*2680e0c0SChristopher Ferris           b = smallbin_at(gm, i);
4629*2680e0c0SChristopher Ferris           p = b->fd;
4630*2680e0c0SChristopher Ferris           assert(chunksize(p) == small_index2size(i));
4631*2680e0c0SChristopher Ferris           unlink_first_small_chunk(gm, b, p, i);
4632*2680e0c0SChristopher Ferris           rsize = small_index2size(i) - nb;
4633*2680e0c0SChristopher Ferris           /* Fit here cannot be remainderless if 4byte sizes */
4634*2680e0c0SChristopher Ferris           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4635*2680e0c0SChristopher Ferris             set_inuse_and_pinuse(gm, p, small_index2size(i));
4636*2680e0c0SChristopher Ferris           else {
4637*2680e0c0SChristopher Ferris             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4638*2680e0c0SChristopher Ferris             r = chunk_plus_offset(p, nb);
4639*2680e0c0SChristopher Ferris             set_size_and_pinuse_of_free_chunk(r, rsize);
4640*2680e0c0SChristopher Ferris             replace_dv(gm, r, rsize);
4641*2680e0c0SChristopher Ferris           }
4642*2680e0c0SChristopher Ferris           mem = chunk2mem(p);
4643*2680e0c0SChristopher Ferris           check_malloced_chunk(gm, mem, nb);
4644*2680e0c0SChristopher Ferris           goto postaction;
4645*2680e0c0SChristopher Ferris         }
4646*2680e0c0SChristopher Ferris 
4647*2680e0c0SChristopher Ferris         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4648*2680e0c0SChristopher Ferris           check_malloced_chunk(gm, mem, nb);
4649*2680e0c0SChristopher Ferris           goto postaction;
4650*2680e0c0SChristopher Ferris         }
4651*2680e0c0SChristopher Ferris       }
4652*2680e0c0SChristopher Ferris     }
4653*2680e0c0SChristopher Ferris     else if (bytes >= MAX_REQUEST)
4654*2680e0c0SChristopher Ferris       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4655*2680e0c0SChristopher Ferris     else {
4656*2680e0c0SChristopher Ferris       nb = pad_request(bytes);
4657*2680e0c0SChristopher Ferris       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4658*2680e0c0SChristopher Ferris         check_malloced_chunk(gm, mem, nb);
4659*2680e0c0SChristopher Ferris         goto postaction;
4660*2680e0c0SChristopher Ferris       }
4661*2680e0c0SChristopher Ferris     }
4662*2680e0c0SChristopher Ferris 
4663*2680e0c0SChristopher Ferris     if (nb <= gm->dvsize) {
4664*2680e0c0SChristopher Ferris       size_t rsize = gm->dvsize - nb;
4665*2680e0c0SChristopher Ferris       mchunkptr p = gm->dv;
4666*2680e0c0SChristopher Ferris       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4667*2680e0c0SChristopher Ferris         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4668*2680e0c0SChristopher Ferris         gm->dvsize = rsize;
4669*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_free_chunk(r, rsize);
4670*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4671*2680e0c0SChristopher Ferris       }
4672*2680e0c0SChristopher Ferris       else { /* exhaust dv */
4673*2680e0c0SChristopher Ferris         size_t dvs = gm->dvsize;
4674*2680e0c0SChristopher Ferris         gm->dvsize = 0;
4675*2680e0c0SChristopher Ferris         gm->dv = 0;
4676*2680e0c0SChristopher Ferris         set_inuse_and_pinuse(gm, p, dvs);
4677*2680e0c0SChristopher Ferris       }
4678*2680e0c0SChristopher Ferris       mem = chunk2mem(p);
4679*2680e0c0SChristopher Ferris       check_malloced_chunk(gm, mem, nb);
4680*2680e0c0SChristopher Ferris       goto postaction;
4681*2680e0c0SChristopher Ferris     }
4682*2680e0c0SChristopher Ferris 
4683*2680e0c0SChristopher Ferris     else if (nb < gm->topsize) { /* Split top */
4684*2680e0c0SChristopher Ferris       size_t rsize = gm->topsize -= nb;
4685*2680e0c0SChristopher Ferris       mchunkptr p = gm->top;
4686*2680e0c0SChristopher Ferris       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4687*2680e0c0SChristopher Ferris       r->head = rsize | PINUSE_BIT;
4688*2680e0c0SChristopher Ferris       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4689*2680e0c0SChristopher Ferris       mem = chunk2mem(p);
4690*2680e0c0SChristopher Ferris       check_top_chunk(gm, gm->top);
4691*2680e0c0SChristopher Ferris       check_malloced_chunk(gm, mem, nb);
4692*2680e0c0SChristopher Ferris       goto postaction;
4693*2680e0c0SChristopher Ferris     }
4694*2680e0c0SChristopher Ferris 
4695*2680e0c0SChristopher Ferris     mem = sys_alloc(gm, nb);
4696*2680e0c0SChristopher Ferris 
4697*2680e0c0SChristopher Ferris   postaction:
4698*2680e0c0SChristopher Ferris     POSTACTION(gm);
4699*2680e0c0SChristopher Ferris     return mem;
4700*2680e0c0SChristopher Ferris   }
4701*2680e0c0SChristopher Ferris 
4702*2680e0c0SChristopher Ferris   return 0;
4703*2680e0c0SChristopher Ferris }
4704*2680e0c0SChristopher Ferris 
4705*2680e0c0SChristopher Ferris /* ---------------------------- free --------------------------- */
4706*2680e0c0SChristopher Ferris 
dlfree(void * mem)4707*2680e0c0SChristopher Ferris void dlfree(void* mem) {
4708*2680e0c0SChristopher Ferris   /*
4709*2680e0c0SChristopher Ferris      Consolidate freed chunks with preceeding or succeeding bordering
4710*2680e0c0SChristopher Ferris      free chunks, if they exist, and then place in a bin.  Intermixed
4711*2680e0c0SChristopher Ferris      with special cases for top, dv, mmapped chunks, and usage errors.
4712*2680e0c0SChristopher Ferris   */
4713*2680e0c0SChristopher Ferris 
4714*2680e0c0SChristopher Ferris   if (mem != 0) {
4715*2680e0c0SChristopher Ferris     mchunkptr p  = mem2chunk(mem);
4716*2680e0c0SChristopher Ferris #if FOOTERS
4717*2680e0c0SChristopher Ferris     mstate fm = get_mstate_for(p);
4718*2680e0c0SChristopher Ferris     if (!ok_magic(fm)) {
4719*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(fm, p);
4720*2680e0c0SChristopher Ferris       return;
4721*2680e0c0SChristopher Ferris     }
4722*2680e0c0SChristopher Ferris #else /* FOOTERS */
4723*2680e0c0SChristopher Ferris #define fm gm
4724*2680e0c0SChristopher Ferris #endif /* FOOTERS */
4725*2680e0c0SChristopher Ferris     if (!PREACTION(fm)) {
4726*2680e0c0SChristopher Ferris       check_inuse_chunk(fm, p);
4727*2680e0c0SChristopher Ferris       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4728*2680e0c0SChristopher Ferris         size_t psize = chunksize(p);
4729*2680e0c0SChristopher Ferris         mchunkptr next = chunk_plus_offset(p, psize);
4730*2680e0c0SChristopher Ferris         if (!pinuse(p)) {
4731*2680e0c0SChristopher Ferris           size_t prevsize = p->prev_foot;
4732*2680e0c0SChristopher Ferris           if (is_mmapped(p)) {
4733*2680e0c0SChristopher Ferris             psize += prevsize + MMAP_FOOT_PAD;
4734*2680e0c0SChristopher Ferris             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4735*2680e0c0SChristopher Ferris               fm->footprint -= psize;
4736*2680e0c0SChristopher Ferris             goto postaction;
4737*2680e0c0SChristopher Ferris           }
4738*2680e0c0SChristopher Ferris           else {
4739*2680e0c0SChristopher Ferris             mchunkptr prev = chunk_minus_offset(p, prevsize);
4740*2680e0c0SChristopher Ferris             psize += prevsize;
4741*2680e0c0SChristopher Ferris             p = prev;
4742*2680e0c0SChristopher Ferris             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4743*2680e0c0SChristopher Ferris               if (p != fm->dv) {
4744*2680e0c0SChristopher Ferris                 unlink_chunk(fm, p, prevsize);
4745*2680e0c0SChristopher Ferris               }
4746*2680e0c0SChristopher Ferris               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4747*2680e0c0SChristopher Ferris                 fm->dvsize = psize;
4748*2680e0c0SChristopher Ferris                 set_free_with_pinuse(p, psize, next);
4749*2680e0c0SChristopher Ferris                 goto postaction;
4750*2680e0c0SChristopher Ferris               }
4751*2680e0c0SChristopher Ferris             }
4752*2680e0c0SChristopher Ferris             else
4753*2680e0c0SChristopher Ferris               goto erroraction;
4754*2680e0c0SChristopher Ferris           }
4755*2680e0c0SChristopher Ferris         }
4756*2680e0c0SChristopher Ferris 
4757*2680e0c0SChristopher Ferris         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4758*2680e0c0SChristopher Ferris           if (!cinuse(next)) {  /* consolidate forward */
4759*2680e0c0SChristopher Ferris             if (next == fm->top) {
4760*2680e0c0SChristopher Ferris               size_t tsize = fm->topsize += psize;
4761*2680e0c0SChristopher Ferris               fm->top = p;
4762*2680e0c0SChristopher Ferris               p->head = tsize | PINUSE_BIT;
4763*2680e0c0SChristopher Ferris               if (p == fm->dv) {
4764*2680e0c0SChristopher Ferris                 fm->dv = 0;
4765*2680e0c0SChristopher Ferris                 fm->dvsize = 0;
4766*2680e0c0SChristopher Ferris               }
4767*2680e0c0SChristopher Ferris               if (should_trim(fm, tsize))
4768*2680e0c0SChristopher Ferris                 sys_trim(fm, 0);
4769*2680e0c0SChristopher Ferris               goto postaction;
4770*2680e0c0SChristopher Ferris             }
4771*2680e0c0SChristopher Ferris             else if (next == fm->dv) {
4772*2680e0c0SChristopher Ferris               size_t dsize = fm->dvsize += psize;
4773*2680e0c0SChristopher Ferris               fm->dv = p;
4774*2680e0c0SChristopher Ferris               set_size_and_pinuse_of_free_chunk(p, dsize);
4775*2680e0c0SChristopher Ferris               goto postaction;
4776*2680e0c0SChristopher Ferris             }
4777*2680e0c0SChristopher Ferris             else {
4778*2680e0c0SChristopher Ferris               size_t nsize = chunksize(next);
4779*2680e0c0SChristopher Ferris               psize += nsize;
4780*2680e0c0SChristopher Ferris               unlink_chunk(fm, next, nsize);
4781*2680e0c0SChristopher Ferris               set_size_and_pinuse_of_free_chunk(p, psize);
4782*2680e0c0SChristopher Ferris               if (p == fm->dv) {
4783*2680e0c0SChristopher Ferris                 fm->dvsize = psize;
4784*2680e0c0SChristopher Ferris                 goto postaction;
4785*2680e0c0SChristopher Ferris               }
4786*2680e0c0SChristopher Ferris             }
4787*2680e0c0SChristopher Ferris           }
4788*2680e0c0SChristopher Ferris           else
4789*2680e0c0SChristopher Ferris             set_free_with_pinuse(p, psize, next);
4790*2680e0c0SChristopher Ferris 
4791*2680e0c0SChristopher Ferris           if (is_small(psize)) {
4792*2680e0c0SChristopher Ferris             insert_small_chunk(fm, p, psize);
4793*2680e0c0SChristopher Ferris             check_free_chunk(fm, p);
4794*2680e0c0SChristopher Ferris           }
4795*2680e0c0SChristopher Ferris           else {
4796*2680e0c0SChristopher Ferris             tchunkptr tp = (tchunkptr)p;
4797*2680e0c0SChristopher Ferris             insert_large_chunk(fm, tp, psize);
4798*2680e0c0SChristopher Ferris             check_free_chunk(fm, p);
4799*2680e0c0SChristopher Ferris             if (--fm->release_checks == 0)
4800*2680e0c0SChristopher Ferris               release_unused_segments(fm);
4801*2680e0c0SChristopher Ferris           }
4802*2680e0c0SChristopher Ferris           goto postaction;
4803*2680e0c0SChristopher Ferris         }
4804*2680e0c0SChristopher Ferris       }
4805*2680e0c0SChristopher Ferris     erroraction:
4806*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(fm, p);
4807*2680e0c0SChristopher Ferris     postaction:
4808*2680e0c0SChristopher Ferris       POSTACTION(fm);
4809*2680e0c0SChristopher Ferris     }
4810*2680e0c0SChristopher Ferris   }
4811*2680e0c0SChristopher Ferris #if !FOOTERS
4812*2680e0c0SChristopher Ferris #undef fm
4813*2680e0c0SChristopher Ferris #endif /* FOOTERS */
4814*2680e0c0SChristopher Ferris }
4815*2680e0c0SChristopher Ferris 
dlcalloc(size_t n_elements,size_t elem_size)4816*2680e0c0SChristopher Ferris void* dlcalloc(size_t n_elements, size_t elem_size) {
4817*2680e0c0SChristopher Ferris   void* mem;
4818*2680e0c0SChristopher Ferris   size_t req = 0;
4819*2680e0c0SChristopher Ferris   if (n_elements != 0) {
4820*2680e0c0SChristopher Ferris     req = n_elements * elem_size;
4821*2680e0c0SChristopher Ferris     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4822*2680e0c0SChristopher Ferris         (req / n_elements != elem_size))
4823*2680e0c0SChristopher Ferris       req = MAX_SIZE_T; /* force downstream failure on overflow */
4824*2680e0c0SChristopher Ferris   }
4825*2680e0c0SChristopher Ferris   mem = dlmalloc(req);
4826*2680e0c0SChristopher Ferris   if (mem != 0) {
4827*2680e0c0SChristopher Ferris     mchunkptr p = mem2chunk(mem);
4828*2680e0c0SChristopher Ferris     if (calloc_must_clear(p)) {
4829*2680e0c0SChristopher Ferris       /* Make sure to clear all of the buffer, not just the requested size. */
4830*2680e0c0SChristopher Ferris       memset(mem, 0, chunksize(p) - overhead_for(p));
4831*2680e0c0SChristopher Ferris     }
4832*2680e0c0SChristopher Ferris   }
4833*2680e0c0SChristopher Ferris   return mem;
4834*2680e0c0SChristopher Ferris }
4835*2680e0c0SChristopher Ferris 
4836*2680e0c0SChristopher Ferris #endif /* !ONLY_MSPACES */
4837*2680e0c0SChristopher Ferris 
4838*2680e0c0SChristopher Ferris /* ------------ Internal support for realloc, memalign, etc -------------- */
4839*2680e0c0SChristopher Ferris 
4840*2680e0c0SChristopher Ferris /* Try to realloc; only in-place unless can_move true */
try_realloc_chunk(mstate m,mchunkptr p,size_t nb,int can_move)4841*2680e0c0SChristopher Ferris static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4842*2680e0c0SChristopher Ferris                                    int can_move) {
4843*2680e0c0SChristopher Ferris   mchunkptr newp = 0;
4844*2680e0c0SChristopher Ferris   size_t oldsize = chunksize(p);
4845*2680e0c0SChristopher Ferris   mchunkptr next = chunk_plus_offset(p, oldsize);
4846*2680e0c0SChristopher Ferris   if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4847*2680e0c0SChristopher Ferris               ok_next(p, next) && ok_pinuse(next))) {
4848*2680e0c0SChristopher Ferris     if (is_mmapped(p)) {
4849*2680e0c0SChristopher Ferris       newp = mmap_resize(m, p, nb, can_move);
4850*2680e0c0SChristopher Ferris     }
4851*2680e0c0SChristopher Ferris     else if (oldsize >= nb) {             /* already big enough */
4852*2680e0c0SChristopher Ferris       size_t rsize = oldsize - nb;
4853*2680e0c0SChristopher Ferris       if (rsize >= MIN_CHUNK_SIZE) {      /* split off remainder */
4854*2680e0c0SChristopher Ferris         mchunkptr r = chunk_plus_offset(p, nb);
4855*2680e0c0SChristopher Ferris         set_inuse(m, p, nb);
4856*2680e0c0SChristopher Ferris         set_inuse(m, r, rsize);
4857*2680e0c0SChristopher Ferris         dispose_chunk(m, r, rsize);
4858*2680e0c0SChristopher Ferris       }
4859*2680e0c0SChristopher Ferris       newp = p;
4860*2680e0c0SChristopher Ferris     }
4861*2680e0c0SChristopher Ferris     else if (next == m->top) {  /* extend into top */
4862*2680e0c0SChristopher Ferris       if (oldsize + m->topsize > nb) {
4863*2680e0c0SChristopher Ferris         size_t newsize = oldsize + m->topsize;
4864*2680e0c0SChristopher Ferris         size_t newtopsize = newsize - nb;
4865*2680e0c0SChristopher Ferris         mchunkptr newtop = chunk_plus_offset(p, nb);
4866*2680e0c0SChristopher Ferris         set_inuse(m, p, nb);
4867*2680e0c0SChristopher Ferris         newtop->head = newtopsize |PINUSE_BIT;
4868*2680e0c0SChristopher Ferris         m->top = newtop;
4869*2680e0c0SChristopher Ferris         m->topsize = newtopsize;
4870*2680e0c0SChristopher Ferris         newp = p;
4871*2680e0c0SChristopher Ferris       }
4872*2680e0c0SChristopher Ferris     }
4873*2680e0c0SChristopher Ferris     else if (next == m->dv) { /* extend into dv */
4874*2680e0c0SChristopher Ferris       size_t dvs = m->dvsize;
4875*2680e0c0SChristopher Ferris       if (oldsize + dvs >= nb) {
4876*2680e0c0SChristopher Ferris         size_t dsize = oldsize + dvs - nb;
4877*2680e0c0SChristopher Ferris         if (dsize >= MIN_CHUNK_SIZE) {
4878*2680e0c0SChristopher Ferris           mchunkptr r = chunk_plus_offset(p, nb);
4879*2680e0c0SChristopher Ferris           mchunkptr n = chunk_plus_offset(r, dsize);
4880*2680e0c0SChristopher Ferris           set_inuse(m, p, nb);
4881*2680e0c0SChristopher Ferris           set_size_and_pinuse_of_free_chunk(r, dsize);
4882*2680e0c0SChristopher Ferris           clear_pinuse(n);
4883*2680e0c0SChristopher Ferris           m->dvsize = dsize;
4884*2680e0c0SChristopher Ferris           m->dv = r;
4885*2680e0c0SChristopher Ferris         }
4886*2680e0c0SChristopher Ferris         else { /* exhaust dv */
4887*2680e0c0SChristopher Ferris           size_t newsize = oldsize + dvs;
4888*2680e0c0SChristopher Ferris           set_inuse(m, p, newsize);
4889*2680e0c0SChristopher Ferris           m->dvsize = 0;
4890*2680e0c0SChristopher Ferris           m->dv = 0;
4891*2680e0c0SChristopher Ferris         }
4892*2680e0c0SChristopher Ferris         newp = p;
4893*2680e0c0SChristopher Ferris       }
4894*2680e0c0SChristopher Ferris     }
4895*2680e0c0SChristopher Ferris     else if (!cinuse(next)) { /* extend into next free chunk */
4896*2680e0c0SChristopher Ferris       size_t nextsize = chunksize(next);
4897*2680e0c0SChristopher Ferris       if (oldsize + nextsize >= nb) {
4898*2680e0c0SChristopher Ferris         size_t rsize = oldsize + nextsize - nb;
4899*2680e0c0SChristopher Ferris         unlink_chunk(m, next, nextsize);
4900*2680e0c0SChristopher Ferris         if (rsize < MIN_CHUNK_SIZE) {
4901*2680e0c0SChristopher Ferris           size_t newsize = oldsize + nextsize;
4902*2680e0c0SChristopher Ferris           set_inuse(m, p, newsize);
4903*2680e0c0SChristopher Ferris         }
4904*2680e0c0SChristopher Ferris         else {
4905*2680e0c0SChristopher Ferris           mchunkptr r = chunk_plus_offset(p, nb);
4906*2680e0c0SChristopher Ferris           set_inuse(m, p, nb);
4907*2680e0c0SChristopher Ferris           set_inuse(m, r, rsize);
4908*2680e0c0SChristopher Ferris           dispose_chunk(m, r, rsize);
4909*2680e0c0SChristopher Ferris         }
4910*2680e0c0SChristopher Ferris         newp = p;
4911*2680e0c0SChristopher Ferris       }
4912*2680e0c0SChristopher Ferris     }
4913*2680e0c0SChristopher Ferris   }
4914*2680e0c0SChristopher Ferris   else {
4915*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(m, chunk2mem(p));
4916*2680e0c0SChristopher Ferris   }
4917*2680e0c0SChristopher Ferris   return newp;
4918*2680e0c0SChristopher Ferris }
4919*2680e0c0SChristopher Ferris 
internal_memalign(mstate m,size_t alignment,size_t bytes)4920*2680e0c0SChristopher Ferris static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4921*2680e0c0SChristopher Ferris   void* mem = 0;
4922*2680e0c0SChristopher Ferris   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4923*2680e0c0SChristopher Ferris     alignment = MIN_CHUNK_SIZE;
4924*2680e0c0SChristopher Ferris   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4925*2680e0c0SChristopher Ferris     size_t a = MALLOC_ALIGNMENT << 1;
4926*2680e0c0SChristopher Ferris     while (a < alignment) a <<= 1;
4927*2680e0c0SChristopher Ferris     alignment = a;
4928*2680e0c0SChristopher Ferris   }
4929*2680e0c0SChristopher Ferris   if (bytes >= MAX_REQUEST - alignment) {
4930*2680e0c0SChristopher Ferris     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
4931*2680e0c0SChristopher Ferris       MALLOC_FAILURE_ACTION;
4932*2680e0c0SChristopher Ferris     }
4933*2680e0c0SChristopher Ferris   }
4934*2680e0c0SChristopher Ferris   else {
4935*2680e0c0SChristopher Ferris     size_t nb = request2size(bytes);
4936*2680e0c0SChristopher Ferris     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4937*2680e0c0SChristopher Ferris     mem = internal_malloc(m, req);
4938*2680e0c0SChristopher Ferris     if (mem != 0) {
4939*2680e0c0SChristopher Ferris       mchunkptr p = mem2chunk(mem);
4940*2680e0c0SChristopher Ferris       if (PREACTION(m))
4941*2680e0c0SChristopher Ferris         return 0;
4942*2680e0c0SChristopher Ferris       if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4943*2680e0c0SChristopher Ferris         /*
4944*2680e0c0SChristopher Ferris           Find an aligned spot inside chunk.  Since we need to give
4945*2680e0c0SChristopher Ferris           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4946*2680e0c0SChristopher Ferris           the first calculation places us at a spot with less than
4947*2680e0c0SChristopher Ferris           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4948*2680e0c0SChristopher Ferris           We've allocated enough total room so that this is always
4949*2680e0c0SChristopher Ferris           possible.
4950*2680e0c0SChristopher Ferris         */
4951*2680e0c0SChristopher Ferris         char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4952*2680e0c0SChristopher Ferris                                                        SIZE_T_ONE)) &
4953*2680e0c0SChristopher Ferris                                              -alignment));
4954*2680e0c0SChristopher Ferris         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4955*2680e0c0SChristopher Ferris           br : br+alignment;
4956*2680e0c0SChristopher Ferris         mchunkptr newp = (mchunkptr)pos;
4957*2680e0c0SChristopher Ferris         size_t leadsize = pos - (char*)(p);
4958*2680e0c0SChristopher Ferris         size_t newsize = chunksize(p) - leadsize;
4959*2680e0c0SChristopher Ferris 
4960*2680e0c0SChristopher Ferris         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4961*2680e0c0SChristopher Ferris           newp->prev_foot = p->prev_foot + leadsize;
4962*2680e0c0SChristopher Ferris           newp->head = newsize;
4963*2680e0c0SChristopher Ferris         }
4964*2680e0c0SChristopher Ferris         else { /* Otherwise, give back leader, use the rest */
4965*2680e0c0SChristopher Ferris           set_inuse(m, newp, newsize);
4966*2680e0c0SChristopher Ferris           set_inuse(m, p, leadsize);
4967*2680e0c0SChristopher Ferris           dispose_chunk(m, p, leadsize);
4968*2680e0c0SChristopher Ferris         }
4969*2680e0c0SChristopher Ferris         p = newp;
4970*2680e0c0SChristopher Ferris       }
4971*2680e0c0SChristopher Ferris 
4972*2680e0c0SChristopher Ferris       /* Give back spare room at the end */
4973*2680e0c0SChristopher Ferris       if (!is_mmapped(p)) {
4974*2680e0c0SChristopher Ferris         size_t size = chunksize(p);
4975*2680e0c0SChristopher Ferris         if (size > nb + MIN_CHUNK_SIZE) {
4976*2680e0c0SChristopher Ferris           size_t remainder_size = size - nb;
4977*2680e0c0SChristopher Ferris           mchunkptr remainder = chunk_plus_offset(p, nb);
4978*2680e0c0SChristopher Ferris           set_inuse(m, p, nb);
4979*2680e0c0SChristopher Ferris           set_inuse(m, remainder, remainder_size);
4980*2680e0c0SChristopher Ferris           dispose_chunk(m, remainder, remainder_size);
4981*2680e0c0SChristopher Ferris         }
4982*2680e0c0SChristopher Ferris       }
4983*2680e0c0SChristopher Ferris 
4984*2680e0c0SChristopher Ferris       mem = chunk2mem(p);
4985*2680e0c0SChristopher Ferris       assert (chunksize(p) >= nb);
4986*2680e0c0SChristopher Ferris       assert(((size_t)mem & (alignment - 1)) == 0);
4987*2680e0c0SChristopher Ferris       check_inuse_chunk(m, p);
4988*2680e0c0SChristopher Ferris       POSTACTION(m);
4989*2680e0c0SChristopher Ferris     }
4990*2680e0c0SChristopher Ferris   }
4991*2680e0c0SChristopher Ferris   return mem;
4992*2680e0c0SChristopher Ferris }
4993*2680e0c0SChristopher Ferris 
4994*2680e0c0SChristopher Ferris /*
4995*2680e0c0SChristopher Ferris   Common support for independent_X routines, handling
4996*2680e0c0SChristopher Ferris     all of the combinations that can result.
4997*2680e0c0SChristopher Ferris   The opts arg has:
4998*2680e0c0SChristopher Ferris     bit 0 set if all elements are same size (using sizes[0])
4999*2680e0c0SChristopher Ferris     bit 1 set if elements should be zeroed
5000*2680e0c0SChristopher Ferris */
ialloc(mstate m,size_t n_elements,size_t * sizes,int opts,void * chunks[])5001*2680e0c0SChristopher Ferris static void** ialloc(mstate m,
5002*2680e0c0SChristopher Ferris                      size_t n_elements,
5003*2680e0c0SChristopher Ferris                      size_t* sizes,
5004*2680e0c0SChristopher Ferris                      int opts,
5005*2680e0c0SChristopher Ferris                      void* chunks[]) {
5006*2680e0c0SChristopher Ferris 
5007*2680e0c0SChristopher Ferris   size_t    element_size;   /* chunksize of each element, if all same */
5008*2680e0c0SChristopher Ferris   size_t    contents_size;  /* total size of elements */
5009*2680e0c0SChristopher Ferris   size_t    array_size;     /* request size of pointer array */
5010*2680e0c0SChristopher Ferris   void*     mem;            /* malloced aggregate space */
5011*2680e0c0SChristopher Ferris   mchunkptr p;              /* corresponding chunk */
5012*2680e0c0SChristopher Ferris   size_t    remainder_size; /* remaining bytes while splitting */
5013*2680e0c0SChristopher Ferris   void**    marray;         /* either "chunks" or malloced ptr array */
5014*2680e0c0SChristopher Ferris   mchunkptr array_chunk;    /* chunk for malloced ptr array */
5015*2680e0c0SChristopher Ferris   flag_t    was_enabled;    /* to disable mmap */
5016*2680e0c0SChristopher Ferris   size_t    size;
5017*2680e0c0SChristopher Ferris   size_t    i;
5018*2680e0c0SChristopher Ferris 
5019*2680e0c0SChristopher Ferris   ensure_initialization();
5020*2680e0c0SChristopher Ferris   /* compute array length, if needed */
5021*2680e0c0SChristopher Ferris   if (chunks != 0) {
5022*2680e0c0SChristopher Ferris     if (n_elements == 0)
5023*2680e0c0SChristopher Ferris       return chunks; /* nothing to do */
5024*2680e0c0SChristopher Ferris     marray = chunks;
5025*2680e0c0SChristopher Ferris     array_size = 0;
5026*2680e0c0SChristopher Ferris   }
5027*2680e0c0SChristopher Ferris   else {
5028*2680e0c0SChristopher Ferris     /* if empty req, must still return chunk representing empty array */
5029*2680e0c0SChristopher Ferris     if (n_elements == 0)
5030*2680e0c0SChristopher Ferris       return (void**)internal_malloc(m, 0);
5031*2680e0c0SChristopher Ferris     marray = 0;
5032*2680e0c0SChristopher Ferris     array_size = request2size(n_elements * (sizeof(void*)));
5033*2680e0c0SChristopher Ferris   }
5034*2680e0c0SChristopher Ferris 
5035*2680e0c0SChristopher Ferris   /* compute total element size */
5036*2680e0c0SChristopher Ferris   if (opts & 0x1) { /* all-same-size */
5037*2680e0c0SChristopher Ferris     element_size = request2size(*sizes);
5038*2680e0c0SChristopher Ferris     contents_size = n_elements * element_size;
5039*2680e0c0SChristopher Ferris   }
5040*2680e0c0SChristopher Ferris   else { /* add up all the sizes */
5041*2680e0c0SChristopher Ferris     element_size = 0;
5042*2680e0c0SChristopher Ferris     contents_size = 0;
5043*2680e0c0SChristopher Ferris     for (i = 0; i != n_elements; ++i)
5044*2680e0c0SChristopher Ferris       contents_size += request2size(sizes[i]);
5045*2680e0c0SChristopher Ferris   }
5046*2680e0c0SChristopher Ferris 
5047*2680e0c0SChristopher Ferris   size = contents_size + array_size;
5048*2680e0c0SChristopher Ferris 
5049*2680e0c0SChristopher Ferris   /*
5050*2680e0c0SChristopher Ferris      Allocate the aggregate chunk.  First disable direct-mmapping so
5051*2680e0c0SChristopher Ferris      malloc won't use it, since we would not be able to later
5052*2680e0c0SChristopher Ferris      free/realloc space internal to a segregated mmap region.
5053*2680e0c0SChristopher Ferris   */
5054*2680e0c0SChristopher Ferris   was_enabled = use_mmap(m);
5055*2680e0c0SChristopher Ferris   disable_mmap(m);
5056*2680e0c0SChristopher Ferris   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5057*2680e0c0SChristopher Ferris   if (was_enabled)
5058*2680e0c0SChristopher Ferris     enable_mmap(m);
5059*2680e0c0SChristopher Ferris   if (mem == 0)
5060*2680e0c0SChristopher Ferris     return 0;
5061*2680e0c0SChristopher Ferris 
5062*2680e0c0SChristopher Ferris   if (PREACTION(m)) return 0;
5063*2680e0c0SChristopher Ferris   p = mem2chunk(mem);
5064*2680e0c0SChristopher Ferris   remainder_size = chunksize(p);
5065*2680e0c0SChristopher Ferris 
5066*2680e0c0SChristopher Ferris   assert(!is_mmapped(p));
5067*2680e0c0SChristopher Ferris 
5068*2680e0c0SChristopher Ferris   if (opts & 0x2) {       /* optionally clear the elements */
5069*2680e0c0SChristopher Ferris     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5070*2680e0c0SChristopher Ferris   }
5071*2680e0c0SChristopher Ferris 
5072*2680e0c0SChristopher Ferris   /* If not provided, allocate the pointer array as final part of chunk */
5073*2680e0c0SChristopher Ferris   if (marray == 0) {
5074*2680e0c0SChristopher Ferris     size_t  array_chunk_size;
5075*2680e0c0SChristopher Ferris     array_chunk = chunk_plus_offset(p, contents_size);
5076*2680e0c0SChristopher Ferris     array_chunk_size = remainder_size - contents_size;
5077*2680e0c0SChristopher Ferris     marray = (void**) (chunk2mem(array_chunk));
5078*2680e0c0SChristopher Ferris     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5079*2680e0c0SChristopher Ferris     remainder_size = contents_size;
5080*2680e0c0SChristopher Ferris   }
5081*2680e0c0SChristopher Ferris 
5082*2680e0c0SChristopher Ferris   /* split out elements */
5083*2680e0c0SChristopher Ferris   for (i = 0; ; ++i) {
5084*2680e0c0SChristopher Ferris     marray[i] = chunk2mem(p);
5085*2680e0c0SChristopher Ferris     if (i != n_elements-1) {
5086*2680e0c0SChristopher Ferris       if (element_size != 0)
5087*2680e0c0SChristopher Ferris         size = element_size;
5088*2680e0c0SChristopher Ferris       else
5089*2680e0c0SChristopher Ferris         size = request2size(sizes[i]);
5090*2680e0c0SChristopher Ferris       remainder_size -= size;
5091*2680e0c0SChristopher Ferris       set_size_and_pinuse_of_inuse_chunk(m, p, size);
5092*2680e0c0SChristopher Ferris       p = chunk_plus_offset(p, size);
5093*2680e0c0SChristopher Ferris     }
5094*2680e0c0SChristopher Ferris     else { /* the final element absorbs any overallocation slop */
5095*2680e0c0SChristopher Ferris       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5096*2680e0c0SChristopher Ferris       break;
5097*2680e0c0SChristopher Ferris     }
5098*2680e0c0SChristopher Ferris   }
5099*2680e0c0SChristopher Ferris 
5100*2680e0c0SChristopher Ferris #if DEBUG
5101*2680e0c0SChristopher Ferris   if (marray != chunks) {
5102*2680e0c0SChristopher Ferris     /* final element must have exactly exhausted chunk */
5103*2680e0c0SChristopher Ferris     if (element_size != 0) {
5104*2680e0c0SChristopher Ferris       assert(remainder_size == element_size);
5105*2680e0c0SChristopher Ferris     }
5106*2680e0c0SChristopher Ferris     else {
5107*2680e0c0SChristopher Ferris       assert(remainder_size == request2size(sizes[i]));
5108*2680e0c0SChristopher Ferris     }
5109*2680e0c0SChristopher Ferris     check_inuse_chunk(m, mem2chunk(marray));
5110*2680e0c0SChristopher Ferris   }
5111*2680e0c0SChristopher Ferris   for (i = 0; i != n_elements; ++i)
5112*2680e0c0SChristopher Ferris     check_inuse_chunk(m, mem2chunk(marray[i]));
5113*2680e0c0SChristopher Ferris 
5114*2680e0c0SChristopher Ferris #endif /* DEBUG */
5115*2680e0c0SChristopher Ferris 
5116*2680e0c0SChristopher Ferris   POSTACTION(m);
5117*2680e0c0SChristopher Ferris   return marray;
5118*2680e0c0SChristopher Ferris }
5119*2680e0c0SChristopher Ferris 
5120*2680e0c0SChristopher Ferris /* Try to free all pointers in the given array.
5121*2680e0c0SChristopher Ferris    Note: this could be made faster, by delaying consolidation,
5122*2680e0c0SChristopher Ferris    at the price of disabling some user integrity checks, We
5123*2680e0c0SChristopher Ferris    still optimize some consolidations by combining adjacent
5124*2680e0c0SChristopher Ferris    chunks before freeing, which will occur often if allocated
5125*2680e0c0SChristopher Ferris    with ialloc or the array is sorted.
5126*2680e0c0SChristopher Ferris */
internal_bulk_free(mstate m,void * array[],size_t nelem)5127*2680e0c0SChristopher Ferris static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5128*2680e0c0SChristopher Ferris   size_t unfreed = 0;
5129*2680e0c0SChristopher Ferris   if (!PREACTION(m)) {
5130*2680e0c0SChristopher Ferris     void** a;
5131*2680e0c0SChristopher Ferris     void** fence = &(array[nelem]);
5132*2680e0c0SChristopher Ferris     for (a = array; a != fence; ++a) {
5133*2680e0c0SChristopher Ferris       void* mem = *a;
5134*2680e0c0SChristopher Ferris       if (mem != 0) {
5135*2680e0c0SChristopher Ferris         mchunkptr p = mem2chunk(mem);
5136*2680e0c0SChristopher Ferris         size_t psize = chunksize(p);
5137*2680e0c0SChristopher Ferris #if FOOTERS
5138*2680e0c0SChristopher Ferris         if (get_mstate_for(p) != m) {
5139*2680e0c0SChristopher Ferris           ++unfreed;
5140*2680e0c0SChristopher Ferris           continue;
5141*2680e0c0SChristopher Ferris         }
5142*2680e0c0SChristopher Ferris #endif
5143*2680e0c0SChristopher Ferris         check_inuse_chunk(m, p);
5144*2680e0c0SChristopher Ferris         *a = 0;
5145*2680e0c0SChristopher Ferris         if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5146*2680e0c0SChristopher Ferris           void ** b = a + 1; /* try to merge with next chunk */
5147*2680e0c0SChristopher Ferris           mchunkptr next = next_chunk(p);
5148*2680e0c0SChristopher Ferris           if (b != fence && *b == chunk2mem(next)) {
5149*2680e0c0SChristopher Ferris             size_t newsize = chunksize(next) + psize;
5150*2680e0c0SChristopher Ferris             set_inuse(m, p, newsize);
5151*2680e0c0SChristopher Ferris             *b = chunk2mem(p);
5152*2680e0c0SChristopher Ferris           }
5153*2680e0c0SChristopher Ferris           else
5154*2680e0c0SChristopher Ferris             dispose_chunk(m, p, psize);
5155*2680e0c0SChristopher Ferris         }
5156*2680e0c0SChristopher Ferris         else {
5157*2680e0c0SChristopher Ferris           CORRUPTION_ERROR_ACTION(m);
5158*2680e0c0SChristopher Ferris           break;
5159*2680e0c0SChristopher Ferris         }
5160*2680e0c0SChristopher Ferris       }
5161*2680e0c0SChristopher Ferris     }
5162*2680e0c0SChristopher Ferris     if (should_trim(m, m->topsize))
5163*2680e0c0SChristopher Ferris       sys_trim(m, 0);
5164*2680e0c0SChristopher Ferris     POSTACTION(m);
5165*2680e0c0SChristopher Ferris   }
5166*2680e0c0SChristopher Ferris   return unfreed;
5167*2680e0c0SChristopher Ferris }
5168*2680e0c0SChristopher Ferris 
5169*2680e0c0SChristopher Ferris /* Traversal */
5170*2680e0c0SChristopher Ferris #if MALLOC_INSPECT_ALL
internal_inspect_all(mstate m,void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5171*2680e0c0SChristopher Ferris static void internal_inspect_all(mstate m,
5172*2680e0c0SChristopher Ferris                                  void(*handler)(void *start,
5173*2680e0c0SChristopher Ferris                                                 void *end,
5174*2680e0c0SChristopher Ferris                                                 size_t used_bytes,
5175*2680e0c0SChristopher Ferris                                                 void* callback_arg),
5176*2680e0c0SChristopher Ferris                                  void* arg) {
5177*2680e0c0SChristopher Ferris   if (is_initialized(m)) {
5178*2680e0c0SChristopher Ferris     mchunkptr top = m->top;
5179*2680e0c0SChristopher Ferris     msegmentptr s;
5180*2680e0c0SChristopher Ferris     for (s = &m->seg; s != 0; s = s->next) {
5181*2680e0c0SChristopher Ferris       mchunkptr q = align_as_chunk(s->base);
5182*2680e0c0SChristopher Ferris       while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5183*2680e0c0SChristopher Ferris         mchunkptr next = next_chunk(q);
5184*2680e0c0SChristopher Ferris         size_t sz = chunksize(q);
5185*2680e0c0SChristopher Ferris         size_t used;
5186*2680e0c0SChristopher Ferris         void* start;
5187*2680e0c0SChristopher Ferris         if (is_inuse(q)) {
5188*2680e0c0SChristopher Ferris           used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5189*2680e0c0SChristopher Ferris           start = chunk2mem(q);
5190*2680e0c0SChristopher Ferris         }
5191*2680e0c0SChristopher Ferris         else {
5192*2680e0c0SChristopher Ferris           used = 0;
5193*2680e0c0SChristopher Ferris           if (is_small(sz)) {     /* offset by possible bookkeeping */
5194*2680e0c0SChristopher Ferris             start = (void*)((char*)q + sizeof(struct malloc_chunk));
5195*2680e0c0SChristopher Ferris           }
5196*2680e0c0SChristopher Ferris           else {
5197*2680e0c0SChristopher Ferris             start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5198*2680e0c0SChristopher Ferris           }
5199*2680e0c0SChristopher Ferris         }
5200*2680e0c0SChristopher Ferris         if (start < (void*)next)  /* skip if all space is bookkeeping */
5201*2680e0c0SChristopher Ferris           handler(start, next, used, arg);
5202*2680e0c0SChristopher Ferris         if (q == top)
5203*2680e0c0SChristopher Ferris           break;
5204*2680e0c0SChristopher Ferris         q = next;
5205*2680e0c0SChristopher Ferris       }
5206*2680e0c0SChristopher Ferris     }
5207*2680e0c0SChristopher Ferris   }
5208*2680e0c0SChristopher Ferris }
5209*2680e0c0SChristopher Ferris #endif /* MALLOC_INSPECT_ALL */
5210*2680e0c0SChristopher Ferris 
5211*2680e0c0SChristopher Ferris /* ------------------ Exported realloc, memalign, etc -------------------- */
5212*2680e0c0SChristopher Ferris 
5213*2680e0c0SChristopher Ferris #if !ONLY_MSPACES
5214*2680e0c0SChristopher Ferris 
dlrealloc(void * oldmem,size_t bytes)5215*2680e0c0SChristopher Ferris void* dlrealloc(void* oldmem, size_t bytes) {
5216*2680e0c0SChristopher Ferris   void* mem = 0;
5217*2680e0c0SChristopher Ferris   if (oldmem == 0) {
5218*2680e0c0SChristopher Ferris     mem = dlmalloc(bytes);
5219*2680e0c0SChristopher Ferris   }
5220*2680e0c0SChristopher Ferris   else if (bytes >= MAX_REQUEST) {
5221*2680e0c0SChristopher Ferris     MALLOC_FAILURE_ACTION;
5222*2680e0c0SChristopher Ferris   }
5223*2680e0c0SChristopher Ferris #ifdef REALLOC_ZERO_BYTES_FREES
5224*2680e0c0SChristopher Ferris   else if (bytes == 0) {
5225*2680e0c0SChristopher Ferris     dlfree(oldmem);
5226*2680e0c0SChristopher Ferris   }
5227*2680e0c0SChristopher Ferris #endif /* REALLOC_ZERO_BYTES_FREES */
5228*2680e0c0SChristopher Ferris   else {
5229*2680e0c0SChristopher Ferris     size_t nb = request2size(bytes);
5230*2680e0c0SChristopher Ferris     mchunkptr oldp = mem2chunk(oldmem);
5231*2680e0c0SChristopher Ferris #if ! FOOTERS
5232*2680e0c0SChristopher Ferris     mstate m = gm;
5233*2680e0c0SChristopher Ferris #else /* FOOTERS */
5234*2680e0c0SChristopher Ferris     mstate m = get_mstate_for(oldp);
5235*2680e0c0SChristopher Ferris     if (!ok_magic(m)) {
5236*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(m, oldmem);
5237*2680e0c0SChristopher Ferris       return 0;
5238*2680e0c0SChristopher Ferris     }
5239*2680e0c0SChristopher Ferris #endif /* FOOTERS */
5240*2680e0c0SChristopher Ferris     if (!PREACTION(m)) {
5241*2680e0c0SChristopher Ferris       mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5242*2680e0c0SChristopher Ferris       POSTACTION(m);
5243*2680e0c0SChristopher Ferris       if (newp != 0) {
5244*2680e0c0SChristopher Ferris         check_inuse_chunk(m, newp);
5245*2680e0c0SChristopher Ferris         mem = chunk2mem(newp);
5246*2680e0c0SChristopher Ferris       }
5247*2680e0c0SChristopher Ferris       else {
5248*2680e0c0SChristopher Ferris         mem = internal_malloc(m, bytes);
5249*2680e0c0SChristopher Ferris         if (mem != 0) {
5250*2680e0c0SChristopher Ferris           size_t oc = chunksize(oldp) - overhead_for(oldp);
5251*2680e0c0SChristopher Ferris           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5252*2680e0c0SChristopher Ferris           internal_free(m, oldmem);
5253*2680e0c0SChristopher Ferris         }
5254*2680e0c0SChristopher Ferris       }
5255*2680e0c0SChristopher Ferris     }
5256*2680e0c0SChristopher Ferris   }
5257*2680e0c0SChristopher Ferris   return mem;
5258*2680e0c0SChristopher Ferris }
5259*2680e0c0SChristopher Ferris 
dlrealloc_in_place(void * oldmem,size_t bytes)5260*2680e0c0SChristopher Ferris void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5261*2680e0c0SChristopher Ferris   void* mem = 0;
5262*2680e0c0SChristopher Ferris   if (oldmem != 0) {
5263*2680e0c0SChristopher Ferris     if (bytes >= MAX_REQUEST) {
5264*2680e0c0SChristopher Ferris       MALLOC_FAILURE_ACTION;
5265*2680e0c0SChristopher Ferris     }
5266*2680e0c0SChristopher Ferris     else {
5267*2680e0c0SChristopher Ferris       size_t nb = request2size(bytes);
5268*2680e0c0SChristopher Ferris       mchunkptr oldp = mem2chunk(oldmem);
5269*2680e0c0SChristopher Ferris #if ! FOOTERS
5270*2680e0c0SChristopher Ferris       mstate m = gm;
5271*2680e0c0SChristopher Ferris #else /* FOOTERS */
5272*2680e0c0SChristopher Ferris       mstate m = get_mstate_for(oldp);
5273*2680e0c0SChristopher Ferris       if (!ok_magic(m)) {
5274*2680e0c0SChristopher Ferris         USAGE_ERROR_ACTION(m, oldmem);
5275*2680e0c0SChristopher Ferris         return 0;
5276*2680e0c0SChristopher Ferris       }
5277*2680e0c0SChristopher Ferris #endif /* FOOTERS */
5278*2680e0c0SChristopher Ferris       if (!PREACTION(m)) {
5279*2680e0c0SChristopher Ferris         mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5280*2680e0c0SChristopher Ferris         POSTACTION(m);
5281*2680e0c0SChristopher Ferris         if (newp == oldp) {
5282*2680e0c0SChristopher Ferris           check_inuse_chunk(m, newp);
5283*2680e0c0SChristopher Ferris           mem = oldmem;
5284*2680e0c0SChristopher Ferris         }
5285*2680e0c0SChristopher Ferris       }
5286*2680e0c0SChristopher Ferris     }
5287*2680e0c0SChristopher Ferris   }
5288*2680e0c0SChristopher Ferris   return mem;
5289*2680e0c0SChristopher Ferris }
5290*2680e0c0SChristopher Ferris 
dlmemalign(size_t alignment,size_t bytes)5291*2680e0c0SChristopher Ferris void* dlmemalign(size_t alignment, size_t bytes) {
5292*2680e0c0SChristopher Ferris   if (alignment <= MALLOC_ALIGNMENT) {
5293*2680e0c0SChristopher Ferris     return dlmalloc(bytes);
5294*2680e0c0SChristopher Ferris   }
5295*2680e0c0SChristopher Ferris   return internal_memalign(gm, alignment, bytes);
5296*2680e0c0SChristopher Ferris }
5297*2680e0c0SChristopher Ferris 
dlposix_memalign(void ** pp,size_t alignment,size_t bytes)5298*2680e0c0SChristopher Ferris int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5299*2680e0c0SChristopher Ferris   void* mem = 0;
5300*2680e0c0SChristopher Ferris   if (alignment == MALLOC_ALIGNMENT)
5301*2680e0c0SChristopher Ferris     mem = dlmalloc(bytes);
5302*2680e0c0SChristopher Ferris   else {
5303*2680e0c0SChristopher Ferris     size_t d = alignment / sizeof(void*);
5304*2680e0c0SChristopher Ferris     size_t r = alignment % sizeof(void*);
5305*2680e0c0SChristopher Ferris     if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5306*2680e0c0SChristopher Ferris       return EINVAL;
5307*2680e0c0SChristopher Ferris     else if (bytes <= MAX_REQUEST - alignment) {
5308*2680e0c0SChristopher Ferris       if (alignment <  MIN_CHUNK_SIZE)
5309*2680e0c0SChristopher Ferris         alignment = MIN_CHUNK_SIZE;
5310*2680e0c0SChristopher Ferris       mem = internal_memalign(gm, alignment, bytes);
5311*2680e0c0SChristopher Ferris     }
5312*2680e0c0SChristopher Ferris   }
5313*2680e0c0SChristopher Ferris   if (mem == 0)
5314*2680e0c0SChristopher Ferris     return ENOMEM;
5315*2680e0c0SChristopher Ferris   else {
5316*2680e0c0SChristopher Ferris     *pp = mem;
5317*2680e0c0SChristopher Ferris     return 0;
5318*2680e0c0SChristopher Ferris   }
5319*2680e0c0SChristopher Ferris }
5320*2680e0c0SChristopher Ferris 
dlvalloc(size_t bytes)5321*2680e0c0SChristopher Ferris void* dlvalloc(size_t bytes) {
5322*2680e0c0SChristopher Ferris   size_t pagesz;
5323*2680e0c0SChristopher Ferris   ensure_initialization();
5324*2680e0c0SChristopher Ferris   pagesz = mparams.page_size;
5325*2680e0c0SChristopher Ferris   return dlmemalign(pagesz, bytes);
5326*2680e0c0SChristopher Ferris }
5327*2680e0c0SChristopher Ferris 
5328*2680e0c0SChristopher Ferris /* BEGIN android-changed: added overflow check */
dlpvalloc(size_t bytes)5329*2680e0c0SChristopher Ferris void* dlpvalloc(size_t bytes) {
5330*2680e0c0SChristopher Ferris   size_t pagesz;
5331*2680e0c0SChristopher Ferris   size_t size;
5332*2680e0c0SChristopher Ferris   ensure_initialization();
5333*2680e0c0SChristopher Ferris   pagesz = mparams.page_size;
5334*2680e0c0SChristopher Ferris   size = (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE);
5335*2680e0c0SChristopher Ferris   if (size < bytes) {
5336*2680e0c0SChristopher Ferris     return NULL;
5337*2680e0c0SChristopher Ferris   }
5338*2680e0c0SChristopher Ferris   return dlmemalign(pagesz, size);
5339*2680e0c0SChristopher Ferris }
5340*2680e0c0SChristopher Ferris /* END android-change */
5341*2680e0c0SChristopher Ferris 
dlindependent_calloc(size_t n_elements,size_t elem_size,void * chunks[])5342*2680e0c0SChristopher Ferris void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5343*2680e0c0SChristopher Ferris                             void* chunks[]) {
5344*2680e0c0SChristopher Ferris   size_t sz = elem_size; /* serves as 1-element array */
5345*2680e0c0SChristopher Ferris   return ialloc(gm, n_elements, &sz, 3, chunks);
5346*2680e0c0SChristopher Ferris }
5347*2680e0c0SChristopher Ferris 
dlindependent_comalloc(size_t n_elements,size_t sizes[],void * chunks[])5348*2680e0c0SChristopher Ferris void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5349*2680e0c0SChristopher Ferris                               void* chunks[]) {
5350*2680e0c0SChristopher Ferris   return ialloc(gm, n_elements, sizes, 0, chunks);
5351*2680e0c0SChristopher Ferris }
5352*2680e0c0SChristopher Ferris 
dlbulk_free(void * array[],size_t nelem)5353*2680e0c0SChristopher Ferris size_t dlbulk_free(void* array[], size_t nelem) {
5354*2680e0c0SChristopher Ferris   return internal_bulk_free(gm, array, nelem);
5355*2680e0c0SChristopher Ferris }
5356*2680e0c0SChristopher Ferris 
5357*2680e0c0SChristopher Ferris #if MALLOC_INSPECT_ALL
dlmalloc_inspect_all(void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5358*2680e0c0SChristopher Ferris void dlmalloc_inspect_all(void(*handler)(void *start,
5359*2680e0c0SChristopher Ferris                                          void *end,
5360*2680e0c0SChristopher Ferris                                          size_t used_bytes,
5361*2680e0c0SChristopher Ferris                                          void* callback_arg),
5362*2680e0c0SChristopher Ferris                           void* arg) {
5363*2680e0c0SChristopher Ferris   ensure_initialization();
5364*2680e0c0SChristopher Ferris   if (!PREACTION(gm)) {
5365*2680e0c0SChristopher Ferris     internal_inspect_all(gm, handler, arg);
5366*2680e0c0SChristopher Ferris     POSTACTION(gm);
5367*2680e0c0SChristopher Ferris   }
5368*2680e0c0SChristopher Ferris }
5369*2680e0c0SChristopher Ferris #endif /* MALLOC_INSPECT_ALL */
5370*2680e0c0SChristopher Ferris 
dlmalloc_trim(size_t pad)5371*2680e0c0SChristopher Ferris int dlmalloc_trim(size_t pad) {
5372*2680e0c0SChristopher Ferris   int result = 0;
5373*2680e0c0SChristopher Ferris   ensure_initialization();
5374*2680e0c0SChristopher Ferris   if (!PREACTION(gm)) {
5375*2680e0c0SChristopher Ferris     result = sys_trim(gm, pad);
5376*2680e0c0SChristopher Ferris     POSTACTION(gm);
5377*2680e0c0SChristopher Ferris   }
5378*2680e0c0SChristopher Ferris   return result;
5379*2680e0c0SChristopher Ferris }
5380*2680e0c0SChristopher Ferris 
dlmalloc_footprint(void)5381*2680e0c0SChristopher Ferris size_t dlmalloc_footprint(void) {
5382*2680e0c0SChristopher Ferris   return gm->footprint;
5383*2680e0c0SChristopher Ferris }
5384*2680e0c0SChristopher Ferris 
dlmalloc_max_footprint(void)5385*2680e0c0SChristopher Ferris size_t dlmalloc_max_footprint(void) {
5386*2680e0c0SChristopher Ferris   return gm->max_footprint;
5387*2680e0c0SChristopher Ferris }
5388*2680e0c0SChristopher Ferris 
dlmalloc_footprint_limit(void)5389*2680e0c0SChristopher Ferris size_t dlmalloc_footprint_limit(void) {
5390*2680e0c0SChristopher Ferris   size_t maf = gm->footprint_limit;
5391*2680e0c0SChristopher Ferris   return maf == 0 ? MAX_SIZE_T : maf;
5392*2680e0c0SChristopher Ferris }
5393*2680e0c0SChristopher Ferris 
dlmalloc_set_footprint_limit(size_t bytes)5394*2680e0c0SChristopher Ferris size_t dlmalloc_set_footprint_limit(size_t bytes) {
5395*2680e0c0SChristopher Ferris   size_t result;  /* invert sense of 0 */
5396*2680e0c0SChristopher Ferris   if (bytes == 0)
5397*2680e0c0SChristopher Ferris     result = granularity_align(1); /* Use minimal size */
5398*2680e0c0SChristopher Ferris   if (bytes == MAX_SIZE_T)
5399*2680e0c0SChristopher Ferris     result = 0;                    /* disable */
5400*2680e0c0SChristopher Ferris   else
5401*2680e0c0SChristopher Ferris     result = granularity_align(bytes);
5402*2680e0c0SChristopher Ferris   return gm->footprint_limit = result;
5403*2680e0c0SChristopher Ferris }
5404*2680e0c0SChristopher Ferris 
5405*2680e0c0SChristopher Ferris #if !NO_MALLINFO
dlmallinfo(void)5406*2680e0c0SChristopher Ferris struct mallinfo dlmallinfo(void) {
5407*2680e0c0SChristopher Ferris   return internal_mallinfo(gm);
5408*2680e0c0SChristopher Ferris }
5409*2680e0c0SChristopher Ferris #endif /* NO_MALLINFO */
5410*2680e0c0SChristopher Ferris 
5411*2680e0c0SChristopher Ferris #if !NO_MALLOC_STATS
dlmalloc_stats()5412*2680e0c0SChristopher Ferris void dlmalloc_stats() {
5413*2680e0c0SChristopher Ferris   internal_malloc_stats(gm);
5414*2680e0c0SChristopher Ferris }
5415*2680e0c0SChristopher Ferris #endif /* NO_MALLOC_STATS */
5416*2680e0c0SChristopher Ferris 
dlmallopt(int param_number,int value)5417*2680e0c0SChristopher Ferris int dlmallopt(int param_number, int value) {
5418*2680e0c0SChristopher Ferris   return change_mparam(param_number, value);
5419*2680e0c0SChristopher Ferris }
5420*2680e0c0SChristopher Ferris 
5421*2680e0c0SChristopher Ferris /* BEGIN android-changed: added const */
dlmalloc_usable_size(const void * mem)5422*2680e0c0SChristopher Ferris size_t dlmalloc_usable_size(const void* mem) {
5423*2680e0c0SChristopher Ferris /* END android-change */
5424*2680e0c0SChristopher Ferris   if (mem != 0) {
5425*2680e0c0SChristopher Ferris     mchunkptr p = mem2chunk(mem);
5426*2680e0c0SChristopher Ferris     if (is_inuse(p))
5427*2680e0c0SChristopher Ferris       return chunksize(p) - overhead_for(p);
5428*2680e0c0SChristopher Ferris   }
5429*2680e0c0SChristopher Ferris   return 0;
5430*2680e0c0SChristopher Ferris }
5431*2680e0c0SChristopher Ferris 
5432*2680e0c0SChristopher Ferris #endif /* !ONLY_MSPACES */
5433*2680e0c0SChristopher Ferris 
5434*2680e0c0SChristopher Ferris /* ----------------------------- user mspaces ---------------------------- */
5435*2680e0c0SChristopher Ferris 
5436*2680e0c0SChristopher Ferris #if MSPACES
5437*2680e0c0SChristopher Ferris 
init_user_mstate(char * tbase,size_t tsize)5438*2680e0c0SChristopher Ferris static mstate init_user_mstate(char* tbase, size_t tsize) {
5439*2680e0c0SChristopher Ferris   size_t msize = pad_request(sizeof(struct malloc_state));
5440*2680e0c0SChristopher Ferris   mchunkptr mn;
5441*2680e0c0SChristopher Ferris   mchunkptr msp = align_as_chunk(tbase);
5442*2680e0c0SChristopher Ferris   mstate m = (mstate)(chunk2mem(msp));
5443*2680e0c0SChristopher Ferris   memset(m, 0, msize);
5444*2680e0c0SChristopher Ferris   (void)INITIAL_LOCK(&m->mutex);
5445*2680e0c0SChristopher Ferris   msp->head = (msize|INUSE_BITS);
5446*2680e0c0SChristopher Ferris   m->seg.base = m->least_addr = tbase;
5447*2680e0c0SChristopher Ferris   m->seg.size = m->footprint = m->max_footprint = tsize;
5448*2680e0c0SChristopher Ferris   m->magic = mparams.magic;
5449*2680e0c0SChristopher Ferris   m->release_checks = MAX_RELEASE_CHECK_RATE;
5450*2680e0c0SChristopher Ferris   m->mflags = mparams.default_mflags;
5451*2680e0c0SChristopher Ferris   m->extp = 0;
5452*2680e0c0SChristopher Ferris   m->exts = 0;
5453*2680e0c0SChristopher Ferris   disable_contiguous(m);
5454*2680e0c0SChristopher Ferris   init_bins(m);
5455*2680e0c0SChristopher Ferris   mn = next_chunk(mem2chunk(m));
5456*2680e0c0SChristopher Ferris   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5457*2680e0c0SChristopher Ferris   check_top_chunk(m, m->top);
5458*2680e0c0SChristopher Ferris   return m;
5459*2680e0c0SChristopher Ferris }
5460*2680e0c0SChristopher Ferris 
create_mspace(size_t capacity,int locked)5461*2680e0c0SChristopher Ferris mspace create_mspace(size_t capacity, int locked) {
5462*2680e0c0SChristopher Ferris   mstate m = 0;
5463*2680e0c0SChristopher Ferris   size_t msize;
5464*2680e0c0SChristopher Ferris   ensure_initialization();
5465*2680e0c0SChristopher Ferris   msize = pad_request(sizeof(struct malloc_state));
5466*2680e0c0SChristopher Ferris   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5467*2680e0c0SChristopher Ferris     size_t rs = ((capacity == 0)? mparams.granularity :
5468*2680e0c0SChristopher Ferris                  (capacity + TOP_FOOT_SIZE + msize));
5469*2680e0c0SChristopher Ferris     size_t tsize = granularity_align(rs);
5470*2680e0c0SChristopher Ferris     char* tbase = (char*)(CALL_MMAP(tsize));
5471*2680e0c0SChristopher Ferris     if (tbase != CMFAIL) {
5472*2680e0c0SChristopher Ferris       m = init_user_mstate(tbase, tsize);
5473*2680e0c0SChristopher Ferris       m->seg.sflags = USE_MMAP_BIT;
5474*2680e0c0SChristopher Ferris       set_lock(m, locked);
5475*2680e0c0SChristopher Ferris     }
5476*2680e0c0SChristopher Ferris   }
5477*2680e0c0SChristopher Ferris   return (mspace)m;
5478*2680e0c0SChristopher Ferris }
5479*2680e0c0SChristopher Ferris 
create_mspace_with_base(void * base,size_t capacity,int locked)5480*2680e0c0SChristopher Ferris mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5481*2680e0c0SChristopher Ferris   mstate m = 0;
5482*2680e0c0SChristopher Ferris   size_t msize;
5483*2680e0c0SChristopher Ferris   ensure_initialization();
5484*2680e0c0SChristopher Ferris   msize = pad_request(sizeof(struct malloc_state));
5485*2680e0c0SChristopher Ferris   if (capacity > msize + TOP_FOOT_SIZE &&
5486*2680e0c0SChristopher Ferris       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5487*2680e0c0SChristopher Ferris     m = init_user_mstate((char*)base, capacity);
5488*2680e0c0SChristopher Ferris     m->seg.sflags = EXTERN_BIT;
5489*2680e0c0SChristopher Ferris     set_lock(m, locked);
5490*2680e0c0SChristopher Ferris   }
5491*2680e0c0SChristopher Ferris   return (mspace)m;
5492*2680e0c0SChristopher Ferris }
5493*2680e0c0SChristopher Ferris 
mspace_track_large_chunks(mspace msp,int enable)5494*2680e0c0SChristopher Ferris int mspace_track_large_chunks(mspace msp, int enable) {
5495*2680e0c0SChristopher Ferris   int ret = 0;
5496*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5497*2680e0c0SChristopher Ferris   if (!PREACTION(ms)) {
5498*2680e0c0SChristopher Ferris     if (!use_mmap(ms)) {
5499*2680e0c0SChristopher Ferris       ret = 1;
5500*2680e0c0SChristopher Ferris     }
5501*2680e0c0SChristopher Ferris     if (!enable) {
5502*2680e0c0SChristopher Ferris       enable_mmap(ms);
5503*2680e0c0SChristopher Ferris     } else {
5504*2680e0c0SChristopher Ferris       disable_mmap(ms);
5505*2680e0c0SChristopher Ferris     }
5506*2680e0c0SChristopher Ferris     POSTACTION(ms);
5507*2680e0c0SChristopher Ferris   }
5508*2680e0c0SChristopher Ferris   return ret;
5509*2680e0c0SChristopher Ferris }
5510*2680e0c0SChristopher Ferris 
destroy_mspace(mspace msp)5511*2680e0c0SChristopher Ferris size_t destroy_mspace(mspace msp) {
5512*2680e0c0SChristopher Ferris   size_t freed = 0;
5513*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5514*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5515*2680e0c0SChristopher Ferris     msegmentptr sp = &ms->seg;
5516*2680e0c0SChristopher Ferris     (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5517*2680e0c0SChristopher Ferris     while (sp != 0) {
5518*2680e0c0SChristopher Ferris       char* base = sp->base;
5519*2680e0c0SChristopher Ferris       size_t size = sp->size;
5520*2680e0c0SChristopher Ferris       flag_t flag = sp->sflags;
5521*2680e0c0SChristopher Ferris       (void)base; /* placate people compiling -Wunused-variable */
5522*2680e0c0SChristopher Ferris       sp = sp->next;
5523*2680e0c0SChristopher Ferris       if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5524*2680e0c0SChristopher Ferris           CALL_MUNMAP(base, size) == 0)
5525*2680e0c0SChristopher Ferris         freed += size;
5526*2680e0c0SChristopher Ferris     }
5527*2680e0c0SChristopher Ferris   }
5528*2680e0c0SChristopher Ferris   else {
5529*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5530*2680e0c0SChristopher Ferris   }
5531*2680e0c0SChristopher Ferris   return freed;
5532*2680e0c0SChristopher Ferris }
5533*2680e0c0SChristopher Ferris 
5534*2680e0c0SChristopher Ferris /*
5535*2680e0c0SChristopher Ferris   mspace versions of routines are near-clones of the global
5536*2680e0c0SChristopher Ferris   versions. This is not so nice but better than the alternatives.
5537*2680e0c0SChristopher Ferris */
5538*2680e0c0SChristopher Ferris 
mspace_malloc(mspace msp,size_t bytes)5539*2680e0c0SChristopher Ferris void* mspace_malloc(mspace msp, size_t bytes) {
5540*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5541*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5542*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5543*2680e0c0SChristopher Ferris     return 0;
5544*2680e0c0SChristopher Ferris   }
5545*2680e0c0SChristopher Ferris   if (!PREACTION(ms)) {
5546*2680e0c0SChristopher Ferris     void* mem;
5547*2680e0c0SChristopher Ferris     size_t nb;
5548*2680e0c0SChristopher Ferris     if (bytes <= MAX_SMALL_REQUEST) {
5549*2680e0c0SChristopher Ferris       bindex_t idx;
5550*2680e0c0SChristopher Ferris       binmap_t smallbits;
5551*2680e0c0SChristopher Ferris       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5552*2680e0c0SChristopher Ferris       idx = small_index(nb);
5553*2680e0c0SChristopher Ferris       smallbits = ms->smallmap >> idx;
5554*2680e0c0SChristopher Ferris 
5555*2680e0c0SChristopher Ferris       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5556*2680e0c0SChristopher Ferris         mchunkptr b, p;
5557*2680e0c0SChristopher Ferris         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
5558*2680e0c0SChristopher Ferris         b = smallbin_at(ms, idx);
5559*2680e0c0SChristopher Ferris         p = b->fd;
5560*2680e0c0SChristopher Ferris         assert(chunksize(p) == small_index2size(idx));
5561*2680e0c0SChristopher Ferris         unlink_first_small_chunk(ms, b, p, idx);
5562*2680e0c0SChristopher Ferris         set_inuse_and_pinuse(ms, p, small_index2size(idx));
5563*2680e0c0SChristopher Ferris         mem = chunk2mem(p);
5564*2680e0c0SChristopher Ferris         check_malloced_chunk(ms, mem, nb);
5565*2680e0c0SChristopher Ferris         goto postaction;
5566*2680e0c0SChristopher Ferris       }
5567*2680e0c0SChristopher Ferris 
5568*2680e0c0SChristopher Ferris       else if (nb > ms->dvsize) {
5569*2680e0c0SChristopher Ferris         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5570*2680e0c0SChristopher Ferris           mchunkptr b, p, r;
5571*2680e0c0SChristopher Ferris           size_t rsize;
5572*2680e0c0SChristopher Ferris           bindex_t i;
5573*2680e0c0SChristopher Ferris           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5574*2680e0c0SChristopher Ferris           binmap_t leastbit = least_bit(leftbits);
5575*2680e0c0SChristopher Ferris           compute_bit2idx(leastbit, i);
5576*2680e0c0SChristopher Ferris           b = smallbin_at(ms, i);
5577*2680e0c0SChristopher Ferris           p = b->fd;
5578*2680e0c0SChristopher Ferris           assert(chunksize(p) == small_index2size(i));
5579*2680e0c0SChristopher Ferris           unlink_first_small_chunk(ms, b, p, i);
5580*2680e0c0SChristopher Ferris           rsize = small_index2size(i) - nb;
5581*2680e0c0SChristopher Ferris           /* Fit here cannot be remainderless if 4byte sizes */
5582*2680e0c0SChristopher Ferris           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5583*2680e0c0SChristopher Ferris             set_inuse_and_pinuse(ms, p, small_index2size(i));
5584*2680e0c0SChristopher Ferris           else {
5585*2680e0c0SChristopher Ferris             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5586*2680e0c0SChristopher Ferris             r = chunk_plus_offset(p, nb);
5587*2680e0c0SChristopher Ferris             set_size_and_pinuse_of_free_chunk(r, rsize);
5588*2680e0c0SChristopher Ferris             replace_dv(ms, r, rsize);
5589*2680e0c0SChristopher Ferris           }
5590*2680e0c0SChristopher Ferris           mem = chunk2mem(p);
5591*2680e0c0SChristopher Ferris           check_malloced_chunk(ms, mem, nb);
5592*2680e0c0SChristopher Ferris           goto postaction;
5593*2680e0c0SChristopher Ferris         }
5594*2680e0c0SChristopher Ferris 
5595*2680e0c0SChristopher Ferris         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5596*2680e0c0SChristopher Ferris           check_malloced_chunk(ms, mem, nb);
5597*2680e0c0SChristopher Ferris           goto postaction;
5598*2680e0c0SChristopher Ferris         }
5599*2680e0c0SChristopher Ferris       }
5600*2680e0c0SChristopher Ferris     }
5601*2680e0c0SChristopher Ferris     else if (bytes >= MAX_REQUEST)
5602*2680e0c0SChristopher Ferris       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5603*2680e0c0SChristopher Ferris     else {
5604*2680e0c0SChristopher Ferris       nb = pad_request(bytes);
5605*2680e0c0SChristopher Ferris       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5606*2680e0c0SChristopher Ferris         check_malloced_chunk(ms, mem, nb);
5607*2680e0c0SChristopher Ferris         goto postaction;
5608*2680e0c0SChristopher Ferris       }
5609*2680e0c0SChristopher Ferris     }
5610*2680e0c0SChristopher Ferris 
5611*2680e0c0SChristopher Ferris     if (nb <= ms->dvsize) {
5612*2680e0c0SChristopher Ferris       size_t rsize = ms->dvsize - nb;
5613*2680e0c0SChristopher Ferris       mchunkptr p = ms->dv;
5614*2680e0c0SChristopher Ferris       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5615*2680e0c0SChristopher Ferris         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5616*2680e0c0SChristopher Ferris         ms->dvsize = rsize;
5617*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_free_chunk(r, rsize);
5618*2680e0c0SChristopher Ferris         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5619*2680e0c0SChristopher Ferris       }
5620*2680e0c0SChristopher Ferris       else { /* exhaust dv */
5621*2680e0c0SChristopher Ferris         size_t dvs = ms->dvsize;
5622*2680e0c0SChristopher Ferris         ms->dvsize = 0;
5623*2680e0c0SChristopher Ferris         ms->dv = 0;
5624*2680e0c0SChristopher Ferris         set_inuse_and_pinuse(ms, p, dvs);
5625*2680e0c0SChristopher Ferris       }
5626*2680e0c0SChristopher Ferris       mem = chunk2mem(p);
5627*2680e0c0SChristopher Ferris       check_malloced_chunk(ms, mem, nb);
5628*2680e0c0SChristopher Ferris       goto postaction;
5629*2680e0c0SChristopher Ferris     }
5630*2680e0c0SChristopher Ferris 
5631*2680e0c0SChristopher Ferris     else if (nb < ms->topsize) { /* Split top */
5632*2680e0c0SChristopher Ferris       size_t rsize = ms->topsize -= nb;
5633*2680e0c0SChristopher Ferris       mchunkptr p = ms->top;
5634*2680e0c0SChristopher Ferris       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5635*2680e0c0SChristopher Ferris       r->head = rsize | PINUSE_BIT;
5636*2680e0c0SChristopher Ferris       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5637*2680e0c0SChristopher Ferris       mem = chunk2mem(p);
5638*2680e0c0SChristopher Ferris       check_top_chunk(ms, ms->top);
5639*2680e0c0SChristopher Ferris       check_malloced_chunk(ms, mem, nb);
5640*2680e0c0SChristopher Ferris       goto postaction;
5641*2680e0c0SChristopher Ferris     }
5642*2680e0c0SChristopher Ferris 
5643*2680e0c0SChristopher Ferris     mem = sys_alloc(ms, nb);
5644*2680e0c0SChristopher Ferris 
5645*2680e0c0SChristopher Ferris   postaction:
5646*2680e0c0SChristopher Ferris     POSTACTION(ms);
5647*2680e0c0SChristopher Ferris     return mem;
5648*2680e0c0SChristopher Ferris   }
5649*2680e0c0SChristopher Ferris 
5650*2680e0c0SChristopher Ferris   return 0;
5651*2680e0c0SChristopher Ferris }
5652*2680e0c0SChristopher Ferris 
mspace_free(mspace msp,void * mem)5653*2680e0c0SChristopher Ferris void mspace_free(mspace msp, void* mem) {
5654*2680e0c0SChristopher Ferris   if (mem != 0) {
5655*2680e0c0SChristopher Ferris     mchunkptr p  = mem2chunk(mem);
5656*2680e0c0SChristopher Ferris #if FOOTERS
5657*2680e0c0SChristopher Ferris     mstate fm = get_mstate_for(p);
5658*2680e0c0SChristopher Ferris     (void)msp; /* placate people compiling -Wunused */
5659*2680e0c0SChristopher Ferris #else /* FOOTERS */
5660*2680e0c0SChristopher Ferris     mstate fm = (mstate)msp;
5661*2680e0c0SChristopher Ferris #endif /* FOOTERS */
5662*2680e0c0SChristopher Ferris     if (!ok_magic(fm)) {
5663*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(fm, p);
5664*2680e0c0SChristopher Ferris       return;
5665*2680e0c0SChristopher Ferris     }
5666*2680e0c0SChristopher Ferris     if (!PREACTION(fm)) {
5667*2680e0c0SChristopher Ferris       check_inuse_chunk(fm, p);
5668*2680e0c0SChristopher Ferris       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
5669*2680e0c0SChristopher Ferris         size_t psize = chunksize(p);
5670*2680e0c0SChristopher Ferris         mchunkptr next = chunk_plus_offset(p, psize);
5671*2680e0c0SChristopher Ferris         if (!pinuse(p)) {
5672*2680e0c0SChristopher Ferris           size_t prevsize = p->prev_foot;
5673*2680e0c0SChristopher Ferris           if (is_mmapped(p)) {
5674*2680e0c0SChristopher Ferris             psize += prevsize + MMAP_FOOT_PAD;
5675*2680e0c0SChristopher Ferris             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
5676*2680e0c0SChristopher Ferris               fm->footprint -= psize;
5677*2680e0c0SChristopher Ferris             goto postaction;
5678*2680e0c0SChristopher Ferris           }
5679*2680e0c0SChristopher Ferris           else {
5680*2680e0c0SChristopher Ferris             mchunkptr prev = chunk_minus_offset(p, prevsize);
5681*2680e0c0SChristopher Ferris             psize += prevsize;
5682*2680e0c0SChristopher Ferris             p = prev;
5683*2680e0c0SChristopher Ferris             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
5684*2680e0c0SChristopher Ferris               if (p != fm->dv) {
5685*2680e0c0SChristopher Ferris                 unlink_chunk(fm, p, prevsize);
5686*2680e0c0SChristopher Ferris               }
5687*2680e0c0SChristopher Ferris               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
5688*2680e0c0SChristopher Ferris                 fm->dvsize = psize;
5689*2680e0c0SChristopher Ferris                 set_free_with_pinuse(p, psize, next);
5690*2680e0c0SChristopher Ferris                 goto postaction;
5691*2680e0c0SChristopher Ferris               }
5692*2680e0c0SChristopher Ferris             }
5693*2680e0c0SChristopher Ferris             else
5694*2680e0c0SChristopher Ferris               goto erroraction;
5695*2680e0c0SChristopher Ferris           }
5696*2680e0c0SChristopher Ferris         }
5697*2680e0c0SChristopher Ferris 
5698*2680e0c0SChristopher Ferris         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
5699*2680e0c0SChristopher Ferris           if (!cinuse(next)) {  /* consolidate forward */
5700*2680e0c0SChristopher Ferris             if (next == fm->top) {
5701*2680e0c0SChristopher Ferris               size_t tsize = fm->topsize += psize;
5702*2680e0c0SChristopher Ferris               fm->top = p;
5703*2680e0c0SChristopher Ferris               p->head = tsize | PINUSE_BIT;
5704*2680e0c0SChristopher Ferris               if (p == fm->dv) {
5705*2680e0c0SChristopher Ferris                 fm->dv = 0;
5706*2680e0c0SChristopher Ferris                 fm->dvsize = 0;
5707*2680e0c0SChristopher Ferris               }
5708*2680e0c0SChristopher Ferris               if (should_trim(fm, tsize))
5709*2680e0c0SChristopher Ferris                 sys_trim(fm, 0);
5710*2680e0c0SChristopher Ferris               goto postaction;
5711*2680e0c0SChristopher Ferris             }
5712*2680e0c0SChristopher Ferris             else if (next == fm->dv) {
5713*2680e0c0SChristopher Ferris               size_t dsize = fm->dvsize += psize;
5714*2680e0c0SChristopher Ferris               fm->dv = p;
5715*2680e0c0SChristopher Ferris               set_size_and_pinuse_of_free_chunk(p, dsize);
5716*2680e0c0SChristopher Ferris               goto postaction;
5717*2680e0c0SChristopher Ferris             }
5718*2680e0c0SChristopher Ferris             else {
5719*2680e0c0SChristopher Ferris               size_t nsize = chunksize(next);
5720*2680e0c0SChristopher Ferris               psize += nsize;
5721*2680e0c0SChristopher Ferris               unlink_chunk(fm, next, nsize);
5722*2680e0c0SChristopher Ferris               set_size_and_pinuse_of_free_chunk(p, psize);
5723*2680e0c0SChristopher Ferris               if (p == fm->dv) {
5724*2680e0c0SChristopher Ferris                 fm->dvsize = psize;
5725*2680e0c0SChristopher Ferris                 goto postaction;
5726*2680e0c0SChristopher Ferris               }
5727*2680e0c0SChristopher Ferris             }
5728*2680e0c0SChristopher Ferris           }
5729*2680e0c0SChristopher Ferris           else
5730*2680e0c0SChristopher Ferris             set_free_with_pinuse(p, psize, next);
5731*2680e0c0SChristopher Ferris 
5732*2680e0c0SChristopher Ferris           if (is_small(psize)) {
5733*2680e0c0SChristopher Ferris             insert_small_chunk(fm, p, psize);
5734*2680e0c0SChristopher Ferris             check_free_chunk(fm, p);
5735*2680e0c0SChristopher Ferris           }
5736*2680e0c0SChristopher Ferris           else {
5737*2680e0c0SChristopher Ferris             tchunkptr tp = (tchunkptr)p;
5738*2680e0c0SChristopher Ferris             insert_large_chunk(fm, tp, psize);
5739*2680e0c0SChristopher Ferris             check_free_chunk(fm, p);
5740*2680e0c0SChristopher Ferris             if (--fm->release_checks == 0)
5741*2680e0c0SChristopher Ferris               release_unused_segments(fm);
5742*2680e0c0SChristopher Ferris           }
5743*2680e0c0SChristopher Ferris           goto postaction;
5744*2680e0c0SChristopher Ferris         }
5745*2680e0c0SChristopher Ferris       }
5746*2680e0c0SChristopher Ferris     erroraction:
5747*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(fm, p);
5748*2680e0c0SChristopher Ferris     postaction:
5749*2680e0c0SChristopher Ferris       POSTACTION(fm);
5750*2680e0c0SChristopher Ferris     }
5751*2680e0c0SChristopher Ferris   }
5752*2680e0c0SChristopher Ferris }
5753*2680e0c0SChristopher Ferris 
mspace_calloc(mspace msp,size_t n_elements,size_t elem_size)5754*2680e0c0SChristopher Ferris void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
5755*2680e0c0SChristopher Ferris   void* mem;
5756*2680e0c0SChristopher Ferris   size_t req = 0;
5757*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5758*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5759*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5760*2680e0c0SChristopher Ferris     return 0;
5761*2680e0c0SChristopher Ferris   }
5762*2680e0c0SChristopher Ferris   if (n_elements != 0) {
5763*2680e0c0SChristopher Ferris     req = n_elements * elem_size;
5764*2680e0c0SChristopher Ferris     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
5765*2680e0c0SChristopher Ferris         (req / n_elements != elem_size))
5766*2680e0c0SChristopher Ferris       req = MAX_SIZE_T; /* force downstream failure on overflow */
5767*2680e0c0SChristopher Ferris   }
5768*2680e0c0SChristopher Ferris   mem = internal_malloc(ms, req);
5769*2680e0c0SChristopher Ferris   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
5770*2680e0c0SChristopher Ferris     memset(mem, 0, req);
5771*2680e0c0SChristopher Ferris   return mem;
5772*2680e0c0SChristopher Ferris }
5773*2680e0c0SChristopher Ferris 
mspace_realloc(mspace msp,void * oldmem,size_t bytes)5774*2680e0c0SChristopher Ferris void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
5775*2680e0c0SChristopher Ferris   void* mem = 0;
5776*2680e0c0SChristopher Ferris   if (oldmem == 0) {
5777*2680e0c0SChristopher Ferris     mem = mspace_malloc(msp, bytes);
5778*2680e0c0SChristopher Ferris   }
5779*2680e0c0SChristopher Ferris   else if (bytes >= MAX_REQUEST) {
5780*2680e0c0SChristopher Ferris     MALLOC_FAILURE_ACTION;
5781*2680e0c0SChristopher Ferris   }
5782*2680e0c0SChristopher Ferris #ifdef REALLOC_ZERO_BYTES_FREES
5783*2680e0c0SChristopher Ferris   else if (bytes == 0) {
5784*2680e0c0SChristopher Ferris     mspace_free(msp, oldmem);
5785*2680e0c0SChristopher Ferris   }
5786*2680e0c0SChristopher Ferris #endif /* REALLOC_ZERO_BYTES_FREES */
5787*2680e0c0SChristopher Ferris   else {
5788*2680e0c0SChristopher Ferris     size_t nb = request2size(bytes);
5789*2680e0c0SChristopher Ferris     mchunkptr oldp = mem2chunk(oldmem);
5790*2680e0c0SChristopher Ferris #if ! FOOTERS
5791*2680e0c0SChristopher Ferris     mstate m = (mstate)msp;
5792*2680e0c0SChristopher Ferris #else /* FOOTERS */
5793*2680e0c0SChristopher Ferris     mstate m = get_mstate_for(oldp);
5794*2680e0c0SChristopher Ferris     if (!ok_magic(m)) {
5795*2680e0c0SChristopher Ferris       USAGE_ERROR_ACTION(m, oldmem);
5796*2680e0c0SChristopher Ferris       return 0;
5797*2680e0c0SChristopher Ferris     }
5798*2680e0c0SChristopher Ferris #endif /* FOOTERS */
5799*2680e0c0SChristopher Ferris     if (!PREACTION(m)) {
5800*2680e0c0SChristopher Ferris       mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5801*2680e0c0SChristopher Ferris       POSTACTION(m);
5802*2680e0c0SChristopher Ferris       if (newp != 0) {
5803*2680e0c0SChristopher Ferris         check_inuse_chunk(m, newp);
5804*2680e0c0SChristopher Ferris         mem = chunk2mem(newp);
5805*2680e0c0SChristopher Ferris       }
5806*2680e0c0SChristopher Ferris       else {
5807*2680e0c0SChristopher Ferris         mem = mspace_malloc(m, bytes);
5808*2680e0c0SChristopher Ferris         if (mem != 0) {
5809*2680e0c0SChristopher Ferris           size_t oc = chunksize(oldp) - overhead_for(oldp);
5810*2680e0c0SChristopher Ferris           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5811*2680e0c0SChristopher Ferris           mspace_free(m, oldmem);
5812*2680e0c0SChristopher Ferris         }
5813*2680e0c0SChristopher Ferris       }
5814*2680e0c0SChristopher Ferris     }
5815*2680e0c0SChristopher Ferris   }
5816*2680e0c0SChristopher Ferris   return mem;
5817*2680e0c0SChristopher Ferris }
5818*2680e0c0SChristopher Ferris 
mspace_realloc_in_place(mspace msp,void * oldmem,size_t bytes)5819*2680e0c0SChristopher Ferris void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
5820*2680e0c0SChristopher Ferris   void* mem = 0;
5821*2680e0c0SChristopher Ferris   if (oldmem != 0) {
5822*2680e0c0SChristopher Ferris     if (bytes >= MAX_REQUEST) {
5823*2680e0c0SChristopher Ferris       MALLOC_FAILURE_ACTION;
5824*2680e0c0SChristopher Ferris     }
5825*2680e0c0SChristopher Ferris     else {
5826*2680e0c0SChristopher Ferris       size_t nb = request2size(bytes);
5827*2680e0c0SChristopher Ferris       mchunkptr oldp = mem2chunk(oldmem);
5828*2680e0c0SChristopher Ferris #if ! FOOTERS
5829*2680e0c0SChristopher Ferris       mstate m = (mstate)msp;
5830*2680e0c0SChristopher Ferris #else /* FOOTERS */
5831*2680e0c0SChristopher Ferris       mstate m = get_mstate_for(oldp);
5832*2680e0c0SChristopher Ferris       (void)msp; /* placate people compiling -Wunused */
5833*2680e0c0SChristopher Ferris       if (!ok_magic(m)) {
5834*2680e0c0SChristopher Ferris         USAGE_ERROR_ACTION(m, oldmem);
5835*2680e0c0SChristopher Ferris         return 0;
5836*2680e0c0SChristopher Ferris       }
5837*2680e0c0SChristopher Ferris #endif /* FOOTERS */
5838*2680e0c0SChristopher Ferris       if (!PREACTION(m)) {
5839*2680e0c0SChristopher Ferris         mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5840*2680e0c0SChristopher Ferris         POSTACTION(m);
5841*2680e0c0SChristopher Ferris         if (newp == oldp) {
5842*2680e0c0SChristopher Ferris           check_inuse_chunk(m, newp);
5843*2680e0c0SChristopher Ferris           mem = oldmem;
5844*2680e0c0SChristopher Ferris         }
5845*2680e0c0SChristopher Ferris       }
5846*2680e0c0SChristopher Ferris     }
5847*2680e0c0SChristopher Ferris   }
5848*2680e0c0SChristopher Ferris   return mem;
5849*2680e0c0SChristopher Ferris }
5850*2680e0c0SChristopher Ferris 
mspace_memalign(mspace msp,size_t alignment,size_t bytes)5851*2680e0c0SChristopher Ferris void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
5852*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5853*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5854*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5855*2680e0c0SChristopher Ferris     return 0;
5856*2680e0c0SChristopher Ferris   }
5857*2680e0c0SChristopher Ferris   if (alignment <= MALLOC_ALIGNMENT)
5858*2680e0c0SChristopher Ferris     return mspace_malloc(msp, bytes);
5859*2680e0c0SChristopher Ferris   return internal_memalign(ms, alignment, bytes);
5860*2680e0c0SChristopher Ferris }
5861*2680e0c0SChristopher Ferris 
mspace_independent_calloc(mspace msp,size_t n_elements,size_t elem_size,void * chunks[])5862*2680e0c0SChristopher Ferris void** mspace_independent_calloc(mspace msp, size_t n_elements,
5863*2680e0c0SChristopher Ferris                                  size_t elem_size, void* chunks[]) {
5864*2680e0c0SChristopher Ferris   size_t sz = elem_size; /* serves as 1-element array */
5865*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5866*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5867*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5868*2680e0c0SChristopher Ferris     return 0;
5869*2680e0c0SChristopher Ferris   }
5870*2680e0c0SChristopher Ferris   return ialloc(ms, n_elements, &sz, 3, chunks);
5871*2680e0c0SChristopher Ferris }
5872*2680e0c0SChristopher Ferris 
mspace_independent_comalloc(mspace msp,size_t n_elements,size_t sizes[],void * chunks[])5873*2680e0c0SChristopher Ferris void** mspace_independent_comalloc(mspace msp, size_t n_elements,
5874*2680e0c0SChristopher Ferris                                    size_t sizes[], void* chunks[]) {
5875*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5876*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5877*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5878*2680e0c0SChristopher Ferris     return 0;
5879*2680e0c0SChristopher Ferris   }
5880*2680e0c0SChristopher Ferris   return ialloc(ms, n_elements, sizes, 0, chunks);
5881*2680e0c0SChristopher Ferris }
5882*2680e0c0SChristopher Ferris 
mspace_bulk_free(mspace msp,void * array[],size_t nelem)5883*2680e0c0SChristopher Ferris size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
5884*2680e0c0SChristopher Ferris   return internal_bulk_free((mstate)msp, array, nelem);
5885*2680e0c0SChristopher Ferris }
5886*2680e0c0SChristopher Ferris 
5887*2680e0c0SChristopher Ferris #if MALLOC_INSPECT_ALL
mspace_inspect_all(mspace msp,void (* handler)(void * start,void * end,size_t used_bytes,void * callback_arg),void * arg)5888*2680e0c0SChristopher Ferris void mspace_inspect_all(mspace msp,
5889*2680e0c0SChristopher Ferris                         void(*handler)(void *start,
5890*2680e0c0SChristopher Ferris                                        void *end,
5891*2680e0c0SChristopher Ferris                                        size_t used_bytes,
5892*2680e0c0SChristopher Ferris                                        void* callback_arg),
5893*2680e0c0SChristopher Ferris                         void* arg) {
5894*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5895*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5896*2680e0c0SChristopher Ferris     if (!PREACTION(ms)) {
5897*2680e0c0SChristopher Ferris       internal_inspect_all(ms, handler, arg);
5898*2680e0c0SChristopher Ferris       POSTACTION(ms);
5899*2680e0c0SChristopher Ferris     }
5900*2680e0c0SChristopher Ferris   }
5901*2680e0c0SChristopher Ferris   else {
5902*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5903*2680e0c0SChristopher Ferris   }
5904*2680e0c0SChristopher Ferris }
5905*2680e0c0SChristopher Ferris #endif /* MALLOC_INSPECT_ALL */
5906*2680e0c0SChristopher Ferris 
mspace_trim(mspace msp,size_t pad)5907*2680e0c0SChristopher Ferris int mspace_trim(mspace msp, size_t pad) {
5908*2680e0c0SChristopher Ferris   int result = 0;
5909*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5910*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5911*2680e0c0SChristopher Ferris     if (!PREACTION(ms)) {
5912*2680e0c0SChristopher Ferris       result = sys_trim(ms, pad);
5913*2680e0c0SChristopher Ferris       POSTACTION(ms);
5914*2680e0c0SChristopher Ferris     }
5915*2680e0c0SChristopher Ferris   }
5916*2680e0c0SChristopher Ferris   else {
5917*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5918*2680e0c0SChristopher Ferris   }
5919*2680e0c0SChristopher Ferris   return result;
5920*2680e0c0SChristopher Ferris }
5921*2680e0c0SChristopher Ferris 
5922*2680e0c0SChristopher Ferris #if !NO_MALLOC_STATS
mspace_malloc_stats(mspace msp)5923*2680e0c0SChristopher Ferris void mspace_malloc_stats(mspace msp) {
5924*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5925*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5926*2680e0c0SChristopher Ferris     internal_malloc_stats(ms);
5927*2680e0c0SChristopher Ferris   }
5928*2680e0c0SChristopher Ferris   else {
5929*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5930*2680e0c0SChristopher Ferris   }
5931*2680e0c0SChristopher Ferris }
5932*2680e0c0SChristopher Ferris #endif /* NO_MALLOC_STATS */
5933*2680e0c0SChristopher Ferris 
mspace_footprint(mspace msp)5934*2680e0c0SChristopher Ferris size_t mspace_footprint(mspace msp) {
5935*2680e0c0SChristopher Ferris   size_t result = 0;
5936*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5937*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5938*2680e0c0SChristopher Ferris     result = ms->footprint;
5939*2680e0c0SChristopher Ferris   }
5940*2680e0c0SChristopher Ferris   else {
5941*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5942*2680e0c0SChristopher Ferris   }
5943*2680e0c0SChristopher Ferris   return result;
5944*2680e0c0SChristopher Ferris }
5945*2680e0c0SChristopher Ferris 
mspace_max_footprint(mspace msp)5946*2680e0c0SChristopher Ferris size_t mspace_max_footprint(mspace msp) {
5947*2680e0c0SChristopher Ferris   size_t result = 0;
5948*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5949*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5950*2680e0c0SChristopher Ferris     result = ms->max_footprint;
5951*2680e0c0SChristopher Ferris   }
5952*2680e0c0SChristopher Ferris   else {
5953*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5954*2680e0c0SChristopher Ferris   }
5955*2680e0c0SChristopher Ferris   return result;
5956*2680e0c0SChristopher Ferris }
5957*2680e0c0SChristopher Ferris 
mspace_footprint_limit(mspace msp)5958*2680e0c0SChristopher Ferris size_t mspace_footprint_limit(mspace msp) {
5959*2680e0c0SChristopher Ferris   size_t result = 0;
5960*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5961*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5962*2680e0c0SChristopher Ferris     size_t maf = ms->footprint_limit;
5963*2680e0c0SChristopher Ferris     result = (maf == 0) ? MAX_SIZE_T : maf;
5964*2680e0c0SChristopher Ferris   }
5965*2680e0c0SChristopher Ferris   else {
5966*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5967*2680e0c0SChristopher Ferris   }
5968*2680e0c0SChristopher Ferris   return result;
5969*2680e0c0SChristopher Ferris }
5970*2680e0c0SChristopher Ferris 
mspace_set_footprint_limit(mspace msp,size_t bytes)5971*2680e0c0SChristopher Ferris size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
5972*2680e0c0SChristopher Ferris   size_t result = 0;
5973*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5974*2680e0c0SChristopher Ferris   if (ok_magic(ms)) {
5975*2680e0c0SChristopher Ferris     if (bytes == 0)
5976*2680e0c0SChristopher Ferris       result = granularity_align(1); /* Use minimal size */
5977*2680e0c0SChristopher Ferris     if (bytes == MAX_SIZE_T)
5978*2680e0c0SChristopher Ferris       result = 0;                    /* disable */
5979*2680e0c0SChristopher Ferris     else
5980*2680e0c0SChristopher Ferris       result = granularity_align(bytes);
5981*2680e0c0SChristopher Ferris     ms->footprint_limit = result;
5982*2680e0c0SChristopher Ferris   }
5983*2680e0c0SChristopher Ferris   else {
5984*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5985*2680e0c0SChristopher Ferris   }
5986*2680e0c0SChristopher Ferris   return result;
5987*2680e0c0SChristopher Ferris }
5988*2680e0c0SChristopher Ferris 
5989*2680e0c0SChristopher Ferris #if !NO_MALLINFO
mspace_mallinfo(mspace msp)5990*2680e0c0SChristopher Ferris struct mallinfo mspace_mallinfo(mspace msp) {
5991*2680e0c0SChristopher Ferris   mstate ms = (mstate)msp;
5992*2680e0c0SChristopher Ferris   if (!ok_magic(ms)) {
5993*2680e0c0SChristopher Ferris     USAGE_ERROR_ACTION(ms,ms);
5994*2680e0c0SChristopher Ferris   }
5995*2680e0c0SChristopher Ferris   return internal_mallinfo(ms);
5996*2680e0c0SChristopher Ferris }
5997*2680e0c0SChristopher Ferris #endif /* NO_MALLINFO */
5998*2680e0c0SChristopher Ferris 
mspace_usable_size(const void * mem)5999*2680e0c0SChristopher Ferris size_t mspace_usable_size(const void* mem) {
6000*2680e0c0SChristopher Ferris   if (mem != 0) {
6001*2680e0c0SChristopher Ferris     mchunkptr p = mem2chunk(mem);
6002*2680e0c0SChristopher Ferris     if (is_inuse(p))
6003*2680e0c0SChristopher Ferris       return chunksize(p) - overhead_for(p);
6004*2680e0c0SChristopher Ferris   }
6005*2680e0c0SChristopher Ferris   return 0;
6006*2680e0c0SChristopher Ferris }
6007*2680e0c0SChristopher Ferris 
mspace_mallopt(int param_number,int value)6008*2680e0c0SChristopher Ferris int mspace_mallopt(int param_number, int value) {
6009*2680e0c0SChristopher Ferris   return change_mparam(param_number, value);
6010*2680e0c0SChristopher Ferris }
6011*2680e0c0SChristopher Ferris 
6012*2680e0c0SChristopher Ferris #endif /* MSPACES */
6013*2680e0c0SChristopher Ferris 
6014*2680e0c0SChristopher Ferris 
6015*2680e0c0SChristopher Ferris /* -------------------- Alternative MORECORE functions ------------------- */
6016*2680e0c0SChristopher Ferris 
6017*2680e0c0SChristopher Ferris /*
6018*2680e0c0SChristopher Ferris   Guidelines for creating a custom version of MORECORE:
6019*2680e0c0SChristopher Ferris 
6020*2680e0c0SChristopher Ferris   * For best performance, MORECORE should allocate in multiples of pagesize.
6021*2680e0c0SChristopher Ferris   * MORECORE may allocate more memory than requested. (Or even less,
6022*2680e0c0SChristopher Ferris       but this will usually result in a malloc failure.)
6023*2680e0c0SChristopher Ferris   * MORECORE must not allocate memory when given argument zero, but
6024*2680e0c0SChristopher Ferris       instead return one past the end address of memory from previous
6025*2680e0c0SChristopher Ferris       nonzero call.
6026*2680e0c0SChristopher Ferris   * For best performance, consecutive calls to MORECORE with positive
6027*2680e0c0SChristopher Ferris       arguments should return increasing addresses, indicating that
6028*2680e0c0SChristopher Ferris       space has been contiguously extended.
6029*2680e0c0SChristopher Ferris   * Even though consecutive calls to MORECORE need not return contiguous
6030*2680e0c0SChristopher Ferris       addresses, it must be OK for malloc'ed chunks to span multiple
6031*2680e0c0SChristopher Ferris       regions in those cases where they do happen to be contiguous.
6032*2680e0c0SChristopher Ferris   * MORECORE need not handle negative arguments -- it may instead
6033*2680e0c0SChristopher Ferris       just return MFAIL when given negative arguments.
6034*2680e0c0SChristopher Ferris       Negative arguments are always multiples of pagesize. MORECORE
6035*2680e0c0SChristopher Ferris       must not misinterpret negative args as large positive unsigned
6036*2680e0c0SChristopher Ferris       args. You can suppress all such calls from even occurring by defining
6037*2680e0c0SChristopher Ferris       MORECORE_CANNOT_TRIM,
6038*2680e0c0SChristopher Ferris 
6039*2680e0c0SChristopher Ferris   As an example alternative MORECORE, here is a custom allocator
6040*2680e0c0SChristopher Ferris   kindly contributed for pre-OSX macOS.  It uses virtually but not
6041*2680e0c0SChristopher Ferris   necessarily physically contiguous non-paged memory (locked in,
6042*2680e0c0SChristopher Ferris   present and won't get swapped out).  You can use it by uncommenting
6043*2680e0c0SChristopher Ferris   this section, adding some #includes, and setting up the appropriate
6044*2680e0c0SChristopher Ferris   defines above:
6045*2680e0c0SChristopher Ferris 
6046*2680e0c0SChristopher Ferris       #define MORECORE osMoreCore
6047*2680e0c0SChristopher Ferris 
6048*2680e0c0SChristopher Ferris   There is also a shutdown routine that should somehow be called for
6049*2680e0c0SChristopher Ferris   cleanup upon program exit.
6050*2680e0c0SChristopher Ferris 
6051*2680e0c0SChristopher Ferris   #define MAX_POOL_ENTRIES 100
6052*2680e0c0SChristopher Ferris   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
6053*2680e0c0SChristopher Ferris   static int next_os_pool;
6054*2680e0c0SChristopher Ferris   void *our_os_pools[MAX_POOL_ENTRIES];
6055*2680e0c0SChristopher Ferris 
6056*2680e0c0SChristopher Ferris   void *osMoreCore(int size)
6057*2680e0c0SChristopher Ferris   {
6058*2680e0c0SChristopher Ferris     void *ptr = 0;
6059*2680e0c0SChristopher Ferris     static void *sbrk_top = 0;
6060*2680e0c0SChristopher Ferris 
6061*2680e0c0SChristopher Ferris     if (size > 0)
6062*2680e0c0SChristopher Ferris     {
6063*2680e0c0SChristopher Ferris       if (size < MINIMUM_MORECORE_SIZE)
6064*2680e0c0SChristopher Ferris          size = MINIMUM_MORECORE_SIZE;
6065*2680e0c0SChristopher Ferris       if (CurrentExecutionLevel() == kTaskLevel)
6066*2680e0c0SChristopher Ferris          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
6067*2680e0c0SChristopher Ferris       if (ptr == 0)
6068*2680e0c0SChristopher Ferris       {
6069*2680e0c0SChristopher Ferris         return (void *) MFAIL;
6070*2680e0c0SChristopher Ferris       }
6071*2680e0c0SChristopher Ferris       // save ptrs so they can be freed during cleanup
6072*2680e0c0SChristopher Ferris       our_os_pools[next_os_pool] = ptr;
6073*2680e0c0SChristopher Ferris       next_os_pool++;
6074*2680e0c0SChristopher Ferris       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
6075*2680e0c0SChristopher Ferris       sbrk_top = (char *) ptr + size;
6076*2680e0c0SChristopher Ferris       return ptr;
6077*2680e0c0SChristopher Ferris     }
6078*2680e0c0SChristopher Ferris     else if (size < 0)
6079*2680e0c0SChristopher Ferris     {
6080*2680e0c0SChristopher Ferris       // we don't currently support shrink behavior
6081*2680e0c0SChristopher Ferris       return (void *) MFAIL;
6082*2680e0c0SChristopher Ferris     }
6083*2680e0c0SChristopher Ferris     else
6084*2680e0c0SChristopher Ferris     {
6085*2680e0c0SChristopher Ferris       return sbrk_top;
6086*2680e0c0SChristopher Ferris     }
6087*2680e0c0SChristopher Ferris   }
6088*2680e0c0SChristopher Ferris 
6089*2680e0c0SChristopher Ferris   // cleanup any allocated memory pools
6090*2680e0c0SChristopher Ferris   // called as last thing before shutting down driver
6091*2680e0c0SChristopher Ferris 
6092*2680e0c0SChristopher Ferris   void osCleanupMem(void)
6093*2680e0c0SChristopher Ferris   {
6094*2680e0c0SChristopher Ferris     void **ptr;
6095*2680e0c0SChristopher Ferris 
6096*2680e0c0SChristopher Ferris     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
6097*2680e0c0SChristopher Ferris       if (*ptr)
6098*2680e0c0SChristopher Ferris       {
6099*2680e0c0SChristopher Ferris          PoolDeallocate(*ptr);
6100*2680e0c0SChristopher Ferris          *ptr = 0;
6101*2680e0c0SChristopher Ferris       }
6102*2680e0c0SChristopher Ferris   }
6103*2680e0c0SChristopher Ferris 
6104*2680e0c0SChristopher Ferris */
6105*2680e0c0SChristopher Ferris 
6106*2680e0c0SChristopher Ferris 
6107*2680e0c0SChristopher Ferris /* -----------------------------------------------------------------------
6108*2680e0c0SChristopher Ferris History:
6109*2680e0c0SChristopher Ferris     v2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
6110*2680e0c0SChristopher Ferris       * fix bad comparison in dlposix_memalign
6111*2680e0c0SChristopher Ferris       * don't reuse adjusted asize in sys_alloc
6112*2680e0c0SChristopher Ferris       * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
6113*2680e0c0SChristopher Ferris       * reduce compiler warnings -- thanks to all who reported/suggested these
6114*2680e0c0SChristopher Ferris 
6115*2680e0c0SChristopher Ferris     v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
6116*2680e0c0SChristopher Ferris       * Always perform unlink checks unless INSECURE
6117*2680e0c0SChristopher Ferris       * Add posix_memalign.
6118*2680e0c0SChristopher Ferris       * Improve realloc to expand in more cases; expose realloc_in_place.
6119*2680e0c0SChristopher Ferris         Thanks to Peter Buhr for the suggestion.
6120*2680e0c0SChristopher Ferris       * Add footprint_limit, inspect_all, bulk_free. Thanks
6121*2680e0c0SChristopher Ferris         to Barry Hayes and others for the suggestions.
6122*2680e0c0SChristopher Ferris       * Internal refactorings to avoid calls while holding locks
6123*2680e0c0SChristopher Ferris       * Use non-reentrant locks by default. Thanks to Roland McGrath
6124*2680e0c0SChristopher Ferris         for the suggestion.
6125*2680e0c0SChristopher Ferris       * Small fixes to mspace_destroy, reset_on_error.
6126*2680e0c0SChristopher Ferris       * Various configuration extensions/changes. Thanks
6127*2680e0c0SChristopher Ferris          to all who contributed these.
6128*2680e0c0SChristopher Ferris 
6129*2680e0c0SChristopher Ferris     V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
6130*2680e0c0SChristopher Ferris       * Update Creative Commons URL
6131*2680e0c0SChristopher Ferris 
6132*2680e0c0SChristopher Ferris     V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
6133*2680e0c0SChristopher Ferris       * Use zeros instead of prev foot for is_mmapped
6134*2680e0c0SChristopher Ferris       * Add mspace_track_large_chunks; thanks to Jean Brouwers
6135*2680e0c0SChristopher Ferris       * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
6136*2680e0c0SChristopher Ferris       * Fix insufficient sys_alloc padding when using 16byte alignment
6137*2680e0c0SChristopher Ferris       * Fix bad error check in mspace_footprint
6138*2680e0c0SChristopher Ferris       * Adaptations for ptmalloc; thanks to Wolfram Gloger.
6139*2680e0c0SChristopher Ferris       * Reentrant spin locks; thanks to Earl Chew and others
6140*2680e0c0SChristopher Ferris       * Win32 improvements; thanks to Niall Douglas and Earl Chew
6141*2680e0c0SChristopher Ferris       * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
6142*2680e0c0SChristopher Ferris       * Extension hook in malloc_state
6143*2680e0c0SChristopher Ferris       * Various small adjustments to reduce warnings on some compilers
6144*2680e0c0SChristopher Ferris       * Various configuration extensions/changes for more platforms. Thanks
6145*2680e0c0SChristopher Ferris          to all who contributed these.
6146*2680e0c0SChristopher Ferris 
6147*2680e0c0SChristopher Ferris     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
6148*2680e0c0SChristopher Ferris       * Add max_footprint functions
6149*2680e0c0SChristopher Ferris       * Ensure all appropriate literals are size_t
6150*2680e0c0SChristopher Ferris       * Fix conditional compilation problem for some #define settings
6151*2680e0c0SChristopher Ferris       * Avoid concatenating segments with the one provided
6152*2680e0c0SChristopher Ferris         in create_mspace_with_base
6153*2680e0c0SChristopher Ferris       * Rename some variables to avoid compiler shadowing warnings
6154*2680e0c0SChristopher Ferris       * Use explicit lock initialization.
6155*2680e0c0SChristopher Ferris       * Better handling of sbrk interference.
6156*2680e0c0SChristopher Ferris       * Simplify and fix segment insertion, trimming and mspace_destroy
6157*2680e0c0SChristopher Ferris       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
6158*2680e0c0SChristopher Ferris       * Thanks especially to Dennis Flanagan for help on these.
6159*2680e0c0SChristopher Ferris 
6160*2680e0c0SChristopher Ferris     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
6161*2680e0c0SChristopher Ferris       * Fix memalign brace error.
6162*2680e0c0SChristopher Ferris 
6163*2680e0c0SChristopher Ferris     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
6164*2680e0c0SChristopher Ferris       * Fix improper #endif nesting in C++
6165*2680e0c0SChristopher Ferris       * Add explicit casts needed for C++
6166*2680e0c0SChristopher Ferris 
6167*2680e0c0SChristopher Ferris     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
6168*2680e0c0SChristopher Ferris       * Use trees for large bins
6169*2680e0c0SChristopher Ferris       * Support mspaces
6170*2680e0c0SChristopher Ferris       * Use segments to unify sbrk-based and mmap-based system allocation,
6171*2680e0c0SChristopher Ferris         removing need for emulation on most platforms without sbrk.
6172*2680e0c0SChristopher Ferris       * Default safety checks
6173*2680e0c0SChristopher Ferris       * Optional footer checks. Thanks to William Robertson for the idea.
6174*2680e0c0SChristopher Ferris       * Internal code refactoring
6175*2680e0c0SChristopher Ferris       * Incorporate suggestions and platform-specific changes.
6176*2680e0c0SChristopher Ferris         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
6177*2680e0c0SChristopher Ferris         Aaron Bachmann,  Emery Berger, and others.
6178*2680e0c0SChristopher Ferris       * Speed up non-fastbin processing enough to remove fastbins.
6179*2680e0c0SChristopher Ferris       * Remove useless cfree() to avoid conflicts with other apps.
6180*2680e0c0SChristopher Ferris       * Remove internal memcpy, memset. Compilers handle builtins better.
6181*2680e0c0SChristopher Ferris       * Remove some options that no one ever used and rename others.
6182*2680e0c0SChristopher Ferris 
6183*2680e0c0SChristopher Ferris     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
6184*2680e0c0SChristopher Ferris       * Fix malloc_state bitmap array misdeclaration
6185*2680e0c0SChristopher Ferris 
6186*2680e0c0SChristopher Ferris     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
6187*2680e0c0SChristopher Ferris       * Allow tuning of FIRST_SORTED_BIN_SIZE
6188*2680e0c0SChristopher Ferris       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
6189*2680e0c0SChristopher Ferris       * Better detection and support for non-contiguousness of MORECORE.
6190*2680e0c0SChristopher Ferris         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
6191*2680e0c0SChristopher Ferris       * Bypass most of malloc if no frees. Thanks To Emery Berger.
6192*2680e0c0SChristopher Ferris       * Fix freeing of old top non-contiguous chunk im sysmalloc.
6193*2680e0c0SChristopher Ferris       * Raised default trim and map thresholds to 256K.
6194*2680e0c0SChristopher Ferris       * Fix mmap-related #defines. Thanks to Lubos Lunak.
6195*2680e0c0SChristopher Ferris       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
6196*2680e0c0SChristopher Ferris       * Branch-free bin calculation
6197*2680e0c0SChristopher Ferris       * Default trim and mmap thresholds now 256K.
6198*2680e0c0SChristopher Ferris 
6199*2680e0c0SChristopher Ferris     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
6200*2680e0c0SChristopher Ferris       * Introduce independent_comalloc and independent_calloc.
6201*2680e0c0SChristopher Ferris         Thanks to Michael Pachos for motivation and help.
6202*2680e0c0SChristopher Ferris       * Make optional .h file available
6203*2680e0c0SChristopher Ferris       * Allow > 2GB requests on 32bit systems.
6204*2680e0c0SChristopher Ferris       * new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.
6205*2680e0c0SChristopher Ferris         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
6206*2680e0c0SChristopher Ferris         and Anonymous.
6207*2680e0c0SChristopher Ferris       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
6208*2680e0c0SChristopher Ferris         helping test this.)
6209*2680e0c0SChristopher Ferris       * memalign: check alignment arg
6210*2680e0c0SChristopher Ferris       * realloc: don't try to shift chunks backwards, since this
6211*2680e0c0SChristopher Ferris         leads to  more fragmentation in some programs and doesn't
6212*2680e0c0SChristopher Ferris         seem to help in any others.
6213*2680e0c0SChristopher Ferris       * Collect all cases in malloc requiring system memory into sysmalloc
6214*2680e0c0SChristopher Ferris       * Use mmap as backup to sbrk
6215*2680e0c0SChristopher Ferris       * Place all internal state in malloc_state
6216*2680e0c0SChristopher Ferris       * Introduce fastbins (although similar to 2.5.1)
6217*2680e0c0SChristopher Ferris       * Many minor tunings and cosmetic improvements
6218*2680e0c0SChristopher Ferris       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
6219*2680e0c0SChristopher Ferris       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
6220*2680e0c0SChristopher Ferris         Thanks to Tony E. Bennett <[email protected]> and others.
6221*2680e0c0SChristopher Ferris       * Include errno.h to support default failure action.
6222*2680e0c0SChristopher Ferris 
6223*2680e0c0SChristopher Ferris     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
6224*2680e0c0SChristopher Ferris       * return null for negative arguments
6225*2680e0c0SChristopher Ferris       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
6226*2680e0c0SChristopher Ferris          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
6227*2680e0c0SChristopher Ferris           (e.g. WIN32 platforms)
6228*2680e0c0SChristopher Ferris          * Cleanup header file inclusion for WIN32 platforms
6229*2680e0c0SChristopher Ferris          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
6230*2680e0c0SChristopher Ferris          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
6231*2680e0c0SChristopher Ferris            memory allocation routines
6232*2680e0c0SChristopher Ferris          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
6233*2680e0c0SChristopher Ferris          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
6234*2680e0c0SChristopher Ferris            usage of 'assert' in non-WIN32 code
6235*2680e0c0SChristopher Ferris          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
6236*2680e0c0SChristopher Ferris            avoid infinite loop
6237*2680e0c0SChristopher Ferris       * Always call 'fREe()' rather than 'free()'
6238*2680e0c0SChristopher Ferris 
6239*2680e0c0SChristopher Ferris     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
6240*2680e0c0SChristopher Ferris       * Fixed ordering problem with boundary-stamping
6241*2680e0c0SChristopher Ferris 
6242*2680e0c0SChristopher Ferris     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
6243*2680e0c0SChristopher Ferris       * Added pvalloc, as recommended by H.J. Liu
6244*2680e0c0SChristopher Ferris       * Added 64bit pointer support mainly from Wolfram Gloger
6245*2680e0c0SChristopher Ferris       * Added anonymously donated WIN32 sbrk emulation
6246*2680e0c0SChristopher Ferris       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
6247*2680e0c0SChristopher Ferris       * malloc_extend_top: fix mask error that caused wastage after
6248*2680e0c0SChristopher Ferris         foreign sbrks
6249*2680e0c0SChristopher Ferris       * Add linux mremap support code from HJ Liu
6250*2680e0c0SChristopher Ferris 
6251*2680e0c0SChristopher Ferris     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
6252*2680e0c0SChristopher Ferris       * Integrated most documentation with the code.
6253*2680e0c0SChristopher Ferris       * Add support for mmap, with help from
6254*2680e0c0SChristopher Ferris         Wolfram Gloger ([email protected]).
6255*2680e0c0SChristopher Ferris       * Use last_remainder in more cases.
6256*2680e0c0SChristopher Ferris       * Pack bins using idea from  [email protected]
6257*2680e0c0SChristopher Ferris       * Use ordered bins instead of best-fit threshhold
6258*2680e0c0SChristopher Ferris       * Eliminate block-local decls to simplify tracing and debugging.
6259*2680e0c0SChristopher Ferris       * Support another case of realloc via move into top
6260*2680e0c0SChristopher Ferris       * Fix error occuring when initial sbrk_base not word-aligned.
6261*2680e0c0SChristopher Ferris       * Rely on page size for units instead of SBRK_UNIT to
6262*2680e0c0SChristopher Ferris         avoid surprises about sbrk alignment conventions.
6263*2680e0c0SChristopher Ferris       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
6264*2680e0c0SChristopher Ferris         ([email protected]) for the suggestion.
6265*2680e0c0SChristopher Ferris       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
6266*2680e0c0SChristopher Ferris       * More precautions for cases where other routines call sbrk,
6267*2680e0c0SChristopher Ferris         courtesy of Wolfram Gloger ([email protected]).
6268*2680e0c0SChristopher Ferris       * Added macros etc., allowing use in linux libc from
6269*2680e0c0SChristopher Ferris         H.J. Lu ([email protected])
6270*2680e0c0SChristopher Ferris       * Inverted this history list
6271*2680e0c0SChristopher Ferris 
6272*2680e0c0SChristopher Ferris     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
6273*2680e0c0SChristopher Ferris       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
6274*2680e0c0SChristopher Ferris       * Removed all preallocation code since under current scheme
6275*2680e0c0SChristopher Ferris         the work required to undo bad preallocations exceeds
6276*2680e0c0SChristopher Ferris         the work saved in good cases for most test programs.
6277*2680e0c0SChristopher Ferris       * No longer use return list or unconsolidated bins since
6278*2680e0c0SChristopher Ferris         no scheme using them consistently outperforms those that don't
6279*2680e0c0SChristopher Ferris         given above changes.
6280*2680e0c0SChristopher Ferris       * Use best fit for very large chunks to prevent some worst-cases.
6281*2680e0c0SChristopher Ferris       * Added some support for debugging
6282*2680e0c0SChristopher Ferris 
6283*2680e0c0SChristopher Ferris     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
6284*2680e0c0SChristopher Ferris       * Removed footers when chunks are in use. Thanks to
6285*2680e0c0SChristopher Ferris         Paul Wilson ([email protected]) for the suggestion.
6286*2680e0c0SChristopher Ferris 
6287*2680e0c0SChristopher Ferris     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
6288*2680e0c0SChristopher Ferris       * Added malloc_trim, with help from Wolfram Gloger
6289*2680e0c0SChristopher Ferris         ([email protected]).
6290*2680e0c0SChristopher Ferris 
6291*2680e0c0SChristopher Ferris     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
6292*2680e0c0SChristopher Ferris 
6293*2680e0c0SChristopher Ferris     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
6294*2680e0c0SChristopher Ferris       * realloc: try to expand in both directions
6295*2680e0c0SChristopher Ferris       * malloc: swap order of clean-bin strategy;
6296*2680e0c0SChristopher Ferris       * realloc: only conditionally expand backwards
6297*2680e0c0SChristopher Ferris       * Try not to scavenge used bins
6298*2680e0c0SChristopher Ferris       * Use bin counts as a guide to preallocation
6299*2680e0c0SChristopher Ferris       * Occasionally bin return list chunks in first scan
6300*2680e0c0SChristopher Ferris       * Add a few optimizations from [email protected]
6301*2680e0c0SChristopher Ferris 
6302*2680e0c0SChristopher Ferris     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
6303*2680e0c0SChristopher Ferris       * faster bin computation & slightly different binning
6304*2680e0c0SChristopher Ferris       * merged all consolidations to one part of malloc proper
6305*2680e0c0SChristopher Ferris          (eliminating old malloc_find_space & malloc_clean_bin)
6306*2680e0c0SChristopher Ferris       * Scan 2 returns chunks (not just 1)
6307*2680e0c0SChristopher Ferris       * Propagate failure in realloc if malloc returns 0
6308*2680e0c0SChristopher Ferris       * Add stuff to allow compilation on non-ANSI compilers
6309*2680e0c0SChristopher Ferris           from [email protected]
6310*2680e0c0SChristopher Ferris 
6311*2680e0c0SChristopher Ferris     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
6312*2680e0c0SChristopher Ferris       * removed potential for odd address access in prev_chunk
6313*2680e0c0SChristopher Ferris       * removed dependency on getpagesize.h
6314*2680e0c0SChristopher Ferris       * misc cosmetics and a bit more internal documentation
6315*2680e0c0SChristopher Ferris       * anticosmetics: mangled names in macros to evade debugger strangeness
6316*2680e0c0SChristopher Ferris       * tested on sparc, hp-700, dec-mips, rs6000
6317*2680e0c0SChristopher Ferris           with gcc & native cc (hp, dec only) allowing
6318*2680e0c0SChristopher Ferris           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
6319*2680e0c0SChristopher Ferris 
6320*2680e0c0SChristopher Ferris     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
6321*2680e0c0SChristopher Ferris       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
6322*2680e0c0SChristopher Ferris          structure of old version,  but most details differ.)
6323*2680e0c0SChristopher Ferris 
6324*2680e0c0SChristopher Ferris */
6325