
#include "stdlib.h"
#include "dlmalloc.h"

#define INSECURE            1
#define USE_DL_PREFIX
//#define MORECORE MoreCore
#define MORECORE_CANNOT_TRIM

#define FOOTERS             0
#define MMAP_CLEARS         1
#define HAVE_MMAP           0
//#define HAVE_USR_INCLUDE_MALLOC_H 1

#define MALLOC_ALIGNMENT    16

/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
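
For example, prefer a cast form such as
  #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
to a bare decimal literal like 65536; this is the style used for the
defaults later in this file.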

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.
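
  For example, a port needing 16-byte alignment (as this file requests
  near its top) might define:
    #define MALLOC_ALIGNMENT ((size_t)16U)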

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
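
  As a minimal sketch (the fixed pool below is hypothetical; this port
  stubs in a custom routine via the commented-out "#define MORECORE
  MoreCore" near the top of this file), a grow-only emulation might
  look like:

    static char arena[1 << 20];    // hypothetical fixed pool
    static size_t arena_top = 0;   // bytes handed out so far
    void* MoreCore(int nb) {       // grow-only sbrk emulation
      if (nb < 0 || arena_top + (size_t)nb > sizeof(arena))
        return (void*)MAX_SIZE_T;  // the MFAIL failure value
      arena_top += (size_t)nb;
      return arena + arena_top - (size_t)nb;  // old top; nb==0 queries it
    }

  A routine like this can never release memory, which is why such
  ports (including this one) also define MORECORE_CANNOT_TRIM.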

MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves time
  and possibly wasted space it would take to discover this though.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

*/

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT 8  //((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 1
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields are
  instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */
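
/*
  Usage sketch: since this port defines USE_DL_PREFIX (see the top of
  this file), the aliases above are not compiled in, and callers use
  the dl-prefixed names directly:

    void* p = dlmalloc(100);   // allocate at least 100 bytes
    p = dlrealloc(p, 200);     // grow (the chunk may move)
    dlfree(p);                 // release it
*/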


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  if n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);
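
/*
  Usage sketch: because a failed realloc leaves p allocated, assign the
  result to a temporary to avoid losing the only pointer to the chunk:

    void* q = dlrealloc(p, n);
    if (q != 0)
      p = q;    // success; the chunk may have moved
    // else: p is unchanged and must eventually still be freed
*/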

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);
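
/*
  Usage sketch: request a block with a larger-than-default alignment
  (the alignment argument should be a power of two):

    void* p = dlmemalign(64, 1000);  // 1000 bytes, 64-byte aligned
*/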

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
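
  For example, to direct all requests of 1MB or more to mmap (assuming
  MMAP support is compiled in):
    int ok = dlmallopt(M_MMAP_THRESHOLD, 1024*1024);  // 1 on success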
*/
int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);
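
/*
  Monitoring sketch: both footprint calls return precomputed values and
  take no locks, so they are cheap enough to poll:

    size_t cur  = dlmalloc_footprint();      // bytes currently held
    size_t peak = dlmalloc_max_footprint();  // high-water mark
*/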

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
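
/*
  Reporting sketch: the struct is returned by copy, so fields can be
  read directly; for example, to compare in-use and free heap bytes:

    struct mallinfo mi = dlmallinfo();
    size_t in_use = mi.uordblks;  // current total allocated space
    size_t avail  = mi.fordblks;  // total free space
*/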
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

863*53ee8cc1Swenshuai.xi   struct Node* build_list() {
864*53ee8cc1Swenshuai.xi     struct Node** pool;
865*53ee8cc1Swenshuai.xi     int n = read_number_of_nodes_needed();
866*53ee8cc1Swenshuai.xi     if (n <= 0) return 0;
867*53ee8cc1Swenshuai.xi     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
868*53ee8cc1Swenshuai.xi     if (pool == 0) die();
869*53ee8cc1Swenshuai.xi     // organize into a linked list...
870*53ee8cc1Swenshuai.xi     struct Node* first = pool[0];
871*53ee8cc1Swenshuai.xi     for (int i = 0; i < n-1; ++i)
872*53ee8cc1Swenshuai.xi       pool[i]->next = pool[i+1];
873*53ee8cc1Swenshuai.xi     free(pool);     // Can now free the array (or not, if it is needed later)
874*53ee8cc1Swenshuai.xi     return first;
875*53ee8cc1Swenshuai.xi   }
876*53ee8cc1Swenshuai.xi */
877*53ee8cc1Swenshuai.xi void** dlindependent_calloc(size_t, size_t, void**);
878*53ee8cc1Swenshuai.xi 
879*53ee8cc1Swenshuai.xi /*
880*53ee8cc1Swenshuai.xi   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
881*53ee8cc1Swenshuai.xi 
882*53ee8cc1Swenshuai.xi   independent_comalloc allocates, all at once, a set of n_elements
883*53ee8cc1Swenshuai.xi   chunks with sizes indicated in the "sizes" array.    It returns
884*53ee8cc1Swenshuai.xi   an array of pointers to these elements, each of which can be
885*53ee8cc1Swenshuai.xi   independently freed, realloc'ed etc. The elements are guaranteed to
886*53ee8cc1Swenshuai.xi   be adjacently allocated (this is not guaranteed to occur with
887*53ee8cc1Swenshuai.xi   multiple callocs or mallocs), which may also improve cache locality
888*53ee8cc1Swenshuai.xi   in some applications.
889*53ee8cc1Swenshuai.xi 
890*53ee8cc1Swenshuai.xi   The "chunks" argument is optional (i.e., may be null). If it is null
891*53ee8cc1Swenshuai.xi   the returned array is itself dynamically allocated and should also
892*53ee8cc1Swenshuai.xi   be freed when it is no longer needed. Otherwise, the chunks array
893*53ee8cc1Swenshuai.xi   must be of at least n_elements in length. It is filled in with the
894*53ee8cc1Swenshuai.xi   pointers to the chunks.
895*53ee8cc1Swenshuai.xi 
896*53ee8cc1Swenshuai.xi   In either case, independent_comalloc returns this pointer array, or
897*53ee8cc1Swenshuai.xi   null if the allocation failed.  If n_elements is zero and chunks is
898*53ee8cc1Swenshuai.xi   null, it returns a chunk representing an array with zero elements
899*53ee8cc1Swenshuai.xi   (which should be freed if not wanted).
900*53ee8cc1Swenshuai.xi 
901*53ee8cc1Swenshuai.xi   Each element must be individually freed when it is no longer
902*53ee8cc1Swenshuai.xi   needed. If you'd like to instead be able to free all at once, you
903*53ee8cc1Swenshuai.xi   should instead use a single regular malloc, and assign pointers at
904*53ee8cc1Swenshuai.xi   particular offsets in the aggregate space. (In this case though, you
905*53ee8cc1Swenshuai.xi   cannot independently free elements.)
906*53ee8cc1Swenshuai.xi 
907*53ee8cc1Swenshuai.xi   independent_comalloc differs from independent_calloc in that each
908*53ee8cc1Swenshuai.xi   element may have a different size, and also that it does not
909*53ee8cc1Swenshuai.xi   automatically clear elements.
910*53ee8cc1Swenshuai.xi 
911*53ee8cc1Swenshuai.xi   independent_comalloc can be used to speed up allocation in cases
912*53ee8cc1Swenshuai.xi   where several structs or objects must always be allocated at the
913*53ee8cc1Swenshuai.xi   same time.  For example:
914*53ee8cc1Swenshuai.xi 
915*53ee8cc1Swenshuai.xi   struct Head { ... };
916*53ee8cc1Swenshuai.xi   struct Foot { ... };
917*53ee8cc1Swenshuai.xi 
918*53ee8cc1Swenshuai.xi   void send_message(char* msg) {
919*53ee8cc1Swenshuai.xi     int msglen = strlen(msg);
920*53ee8cc1Swenshuai.xi     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
921*53ee8cc1Swenshuai.xi     void* chunks[3];
922*53ee8cc1Swenshuai.xi     if (independent_comalloc(3, sizes, chunks) == 0)
923*53ee8cc1Swenshuai.xi       die();
924*53ee8cc1Swenshuai.xi     struct Head* head = (struct Head*)(chunks[0]);
925*53ee8cc1Swenshuai.xi     char*        body = (char*)(chunks[1]);
926*53ee8cc1Swenshuai.xi     struct Foot* foot = (struct Foot*)(chunks[2]);
927*53ee8cc1Swenshuai.xi     // ...
928*53ee8cc1Swenshuai.xi   }
929*53ee8cc1Swenshuai.xi 
930*53ee8cc1Swenshuai.xi   In general though, independent_comalloc is worth using only for
931*53ee8cc1Swenshuai.xi   larger values of n_elements. For small values, you probably won't
932*53ee8cc1Swenshuai.xi   detect enough difference from series of malloc calls to bother.
933*53ee8cc1Swenshuai.xi 
934*53ee8cc1Swenshuai.xi   Overuse of independent_comalloc can increase overall memory usage,
935*53ee8cc1Swenshuai.xi   since it cannot reuse existing noncontiguous small chunks that
936*53ee8cc1Swenshuai.xi   might be available for some of the elements.
937*53ee8cc1Swenshuai.xi */
938*53ee8cc1Swenshuai.xi void** dlindependent_comalloc(size_t, size_t*, void**);
939*53ee8cc1Swenshuai.xi 
940*53ee8cc1Swenshuai.xi 
941*53ee8cc1Swenshuai.xi /*
942*53ee8cc1Swenshuai.xi   pvalloc(size_t n);
943*53ee8cc1Swenshuai.xi   Equivalent to valloc(minimum-page-that-holds(n)), that is,
944*53ee8cc1Swenshuai.xi   round up n to nearest pagesize.
945*53ee8cc1Swenshuai.xi  */
946*53ee8cc1Swenshuai.xi void*  dlpvalloc(size_t);
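
/*
  For example, assuming a 4096-byte page: pvalloc(1) and pvalloc(4096)
  each return a page-aligned block of 4096 usable bytes, while
  pvalloc(4097) returns 8192 bytes.
*/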
947*53ee8cc1Swenshuai.xi 
948*53ee8cc1Swenshuai.xi /*
949*53ee8cc1Swenshuai.xi   malloc_trim(size_t pad);
950*53ee8cc1Swenshuai.xi 
951*53ee8cc1Swenshuai.xi   If possible, gives memory back to the system (via negative arguments
952*53ee8cc1Swenshuai.xi   to sbrk) if there is unused memory at the `high' end of the malloc
953*53ee8cc1Swenshuai.xi   pool or in unused MMAP segments. You can call this after freeing
954*53ee8cc1Swenshuai.xi   large blocks of memory to potentially reduce the system-level memory
955*53ee8cc1Swenshuai.xi   requirements of a program. However, it cannot guarantee to reduce
956*53ee8cc1Swenshuai.xi   memory. Under some allocation patterns, some large free blocks of
957*53ee8cc1Swenshuai.xi   memory will be locked between two used chunks, so they cannot be
958*53ee8cc1Swenshuai.xi   given back to the system.
959*53ee8cc1Swenshuai.xi 
960*53ee8cc1Swenshuai.xi   The `pad' argument to malloc_trim represents the amount of free
961*53ee8cc1Swenshuai.xi   trailing space to leave untrimmed. If this argument is zero, only
962*53ee8cc1Swenshuai.xi   the minimum amount of memory to maintain internal data structures
963*53ee8cc1Swenshuai.xi   will be left. Non-zero arguments can be supplied to maintain enough
964*53ee8cc1Swenshuai.xi   trailing space to service future expected allocations without having
965*53ee8cc1Swenshuai.xi   to re-obtain memory from the system.
966*53ee8cc1Swenshuai.xi 
967*53ee8cc1Swenshuai.xi   Malloc_trim returns 1 if it actually released any memory, else 0.
968*53ee8cc1Swenshuai.xi */
969*53ee8cc1Swenshuai.xi int  dlmalloc_trim(size_t);
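
/*
  A typical pattern is to trim right after releasing a large buffer.
  The sketch below is illustrative only; the 64K pad is an arbitrary
  choice of headroom for expected future allocations:

  void release_big_buffer(void* buf) {
    free(buf);
    if (malloc_trim(64 * 1024) == 0) {
      // nothing was returned; the space remains in the malloc pool
    }
  }
*/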
970*53ee8cc1Swenshuai.xi 
971*53ee8cc1Swenshuai.xi /*
972*53ee8cc1Swenshuai.xi   malloc_usable_size(void* p);
973*53ee8cc1Swenshuai.xi 
974*53ee8cc1Swenshuai.xi   Returns the number of bytes you can actually use in
975*53ee8cc1Swenshuai.xi   an allocated chunk, which may be more than you requested (although
976*53ee8cc1Swenshuai.xi   often not) due to alignment and minimum size constraints.
977*53ee8cc1Swenshuai.xi   You can use this many bytes without worrying about
978*53ee8cc1Swenshuai.xi   overwriting other allocated objects. This is not a particularly great
979*53ee8cc1Swenshuai.xi   programming practice. malloc_usable_size can be more useful in
980*53ee8cc1Swenshuai.xi   debugging and assertions, for example:
981*53ee8cc1Swenshuai.xi 
982*53ee8cc1Swenshuai.xi   p = malloc(n);
983*53ee8cc1Swenshuai.xi   assert(malloc_usable_size(p) >= 256);
984*53ee8cc1Swenshuai.xi */
985*53ee8cc1Swenshuai.xi size_t dlmalloc_usable_size(void*);
986*53ee8cc1Swenshuai.xi 
987*53ee8cc1Swenshuai.xi /*
988*53ee8cc1Swenshuai.xi   malloc_stats();
989*53ee8cc1Swenshuai.xi   Prints on stderr the amount of space obtained from the system (both
990*53ee8cc1Swenshuai.xi   via sbrk and mmap), the maximum amount (which may be more than
991*53ee8cc1Swenshuai.xi   current if malloc_trim and/or munmap got called), and the current
992*53ee8cc1Swenshuai.xi   number of bytes allocated via malloc (or realloc, etc) but not yet
993*53ee8cc1Swenshuai.xi   freed. Note that this is the number of bytes allocated, not the
994*53ee8cc1Swenshuai.xi   number requested. It will be larger than the number requested
995*53ee8cc1Swenshuai.xi   because of alignment and bookkeeping overhead. Because it includes
996*53ee8cc1Swenshuai.xi   alignment wastage as being in use, this figure may be greater than
997*53ee8cc1Swenshuai.xi   zero even when no user-level chunks are allocated.
998*53ee8cc1Swenshuai.xi 
999*53ee8cc1Swenshuai.xi   The reported current and maximum system memory can be inaccurate if
1000*53ee8cc1Swenshuai.xi   a program makes other calls to system memory allocation functions
1001*53ee8cc1Swenshuai.xi   (normally sbrk) outside of malloc.
1002*53ee8cc1Swenshuai.xi 
1003*53ee8cc1Swenshuai.xi   malloc_stats prints only the most commonly interesting statistics.
1004*53ee8cc1Swenshuai.xi   More information can be obtained by calling mallinfo.
1005*53ee8cc1Swenshuai.xi */
1006*53ee8cc1Swenshuai.xi void  dlmalloc_stats(void);
1007*53ee8cc1Swenshuai.xi 
1008*53ee8cc1Swenshuai.xi #endif /* ONLY_MSPACES */
1009*53ee8cc1Swenshuai.xi 
1010*53ee8cc1Swenshuai.xi #if MSPACES
1011*53ee8cc1Swenshuai.xi 
1012*53ee8cc1Swenshuai.xi /*
1013*53ee8cc1Swenshuai.xi   mspace is an opaque type representing an independent
1014*53ee8cc1Swenshuai.xi   region of space that supports mspace_malloc, etc.
1015*53ee8cc1Swenshuai.xi */
1016*53ee8cc1Swenshuai.xi typedef void* mspace;
1017*53ee8cc1Swenshuai.xi 
1018*53ee8cc1Swenshuai.xi /*
1019*53ee8cc1Swenshuai.xi   create_mspace creates and returns a new independent space with the
1020*53ee8cc1Swenshuai.xi   given initial capacity, or, if 0, the default granularity size.  It
1021*53ee8cc1Swenshuai.xi   returns null if there is no system memory available to create the
1022*53ee8cc1Swenshuai.xi   space.  If argument locked is non-zero, the space uses a separate
1023*53ee8cc1Swenshuai.xi   lock to control access. The capacity of the space will grow
1024*53ee8cc1Swenshuai.xi   dynamically as needed to service mspace_malloc requests.  You can
1025*53ee8cc1Swenshuai.xi   control the sizes of incremental increases of this space by
1026*53ee8cc1Swenshuai.xi   compiling with a different DEFAULT_GRANULARITY or dynamically
1027*53ee8cc1Swenshuai.xi   setting with mallopt(M_GRANULARITY, value).
1028*53ee8cc1Swenshuai.xi */
1029*53ee8cc1Swenshuai.xi mspace create_mspace(size_t capacity, int locked);
1030*53ee8cc1Swenshuai.xi 
1031*53ee8cc1Swenshuai.xi /*
1032*53ee8cc1Swenshuai.xi   destroy_mspace destroys the given space, and attempts to return all
1033*53ee8cc1Swenshuai.xi   of its memory back to the system, returning the total number of
1034*53ee8cc1Swenshuai.xi   bytes freed. After destruction, the results of access to all memory
1035*53ee8cc1Swenshuai.xi   used by the space become undefined.
1036*53ee8cc1Swenshuai.xi */
1037*53ee8cc1Swenshuai.xi size_t destroy_mspace(mspace msp);
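
/*
  Together, create_mspace and destroy_mspace support short-lived
  private heaps. A minimal sketch (with_scratch_heap is an illustrative
  name; mspace_malloc is declared below):

  void with_scratch_heap(void) {
    mspace scratch = create_mspace(0, 0);  // default capacity, no lock
    if (scratch == 0) return;
    void* a = mspace_malloc(scratch, 128);
    void* b = mspace_malloc(scratch, 256);
    // ... use a and b ...
    destroy_mspace(scratch);  // reclaims a, b, and the space itself
  }
*/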
1038*53ee8cc1Swenshuai.xi 
1039*53ee8cc1Swenshuai.xi /*
1040*53ee8cc1Swenshuai.xi   create_mspace_with_base uses the memory supplied as the initial base
1041*53ee8cc1Swenshuai.xi   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1042*53ee8cc1Swenshuai.xi   space is used for bookkeeping, so the capacity must be at least this
1043*53ee8cc1Swenshuai.xi   large. (Otherwise 0 is returned.) When this initial space is
1044*53ee8cc1Swenshuai.xi   exhausted, additional memory will be obtained from the system.
1045*53ee8cc1Swenshuai.xi   Destroying this space will deallocate all additionally allocated
1046*53ee8cc1Swenshuai.xi   space (if possible) but not the initial base.
1047*53ee8cc1Swenshuai.xi */
1048*53ee8cc1Swenshuai.xi mspace create_mspace_with_base(void* base, size_t capacity, int locked);
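
/*
  A minimal sketch, assuming a dedicated static region (for instance a
  block of on-chip SRAM); buffer size and names are illustrative:

  static char arena_buf[64 * 1024];

  void init_private_heap(void) {
    mspace m = create_mspace_with_base(arena_buf, sizeof(arena_buf), 0);
    if (m != 0) {
      void* p = mspace_malloc(m, 100);  // served from arena_buf
      // ...
    }
  }
*/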
1049*53ee8cc1Swenshuai.xi 
1050*53ee8cc1Swenshuai.xi /*
1051*53ee8cc1Swenshuai.xi   mspace_malloc behaves as malloc, but operates within
1052*53ee8cc1Swenshuai.xi   the given space.
1053*53ee8cc1Swenshuai.xi */
1054*53ee8cc1Swenshuai.xi void* mspace_malloc(mspace msp, size_t bytes);
1055*53ee8cc1Swenshuai.xi 
1056*53ee8cc1Swenshuai.xi /*
1057*53ee8cc1Swenshuai.xi   mspace_free behaves as free, but operates within
1058*53ee8cc1Swenshuai.xi   the given space.
1059*53ee8cc1Swenshuai.xi 
1060*53ee8cc1Swenshuai.xi   If compiled with FOOTERS==1, mspace_free is not actually needed.
1061*53ee8cc1Swenshuai.xi   free may be called instead of mspace_free because freed chunks from
1062*53ee8cc1Swenshuai.xi   any space are handled by their originating spaces.
1063*53ee8cc1Swenshuai.xi */
1064*53ee8cc1Swenshuai.xi void mspace_free(mspace msp, void* mem);
1065*53ee8cc1Swenshuai.xi 
1066*53ee8cc1Swenshuai.xi /*
1067*53ee8cc1Swenshuai.xi   mspace_realloc behaves as realloc, but operates within
1068*53ee8cc1Swenshuai.xi   the given space.
1069*53ee8cc1Swenshuai.xi 
1070*53ee8cc1Swenshuai.xi   If compiled with FOOTERS==1, mspace_realloc is not actually
1071*53ee8cc1Swenshuai.xi   needed.  realloc may be called instead of mspace_realloc because
1072*53ee8cc1Swenshuai.xi   realloced chunks from any space are handled by their originating
1073*53ee8cc1Swenshuai.xi   spaces.
1074*53ee8cc1Swenshuai.xi */
1075*53ee8cc1Swenshuai.xi void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1076*53ee8cc1Swenshuai.xi 
1077*53ee8cc1Swenshuai.xi /*
1078*53ee8cc1Swenshuai.xi   mspace_calloc behaves as calloc, but operates within
1079*53ee8cc1Swenshuai.xi   the given space.
1080*53ee8cc1Swenshuai.xi */
1081*53ee8cc1Swenshuai.xi void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1082*53ee8cc1Swenshuai.xi 
1083*53ee8cc1Swenshuai.xi /*
1084*53ee8cc1Swenshuai.xi   mspace_memalign behaves as memalign, but operates within
1085*53ee8cc1Swenshuai.xi   the given space.
1086*53ee8cc1Swenshuai.xi */
1087*53ee8cc1Swenshuai.xi void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1088*53ee8cc1Swenshuai.xi 
1089*53ee8cc1Swenshuai.xi /*
1090*53ee8cc1Swenshuai.xi   mspace_independent_calloc behaves as independent_calloc, but
1091*53ee8cc1Swenshuai.xi   operates within the given space.
1092*53ee8cc1Swenshuai.xi */
1093*53ee8cc1Swenshuai.xi void** mspace_independent_calloc(mspace msp, size_t n_elements,
1094*53ee8cc1Swenshuai.xi                                  size_t elem_size, void* chunks[]);
1095*53ee8cc1Swenshuai.xi 
1096*53ee8cc1Swenshuai.xi /*
1097*53ee8cc1Swenshuai.xi   mspace_independent_comalloc behaves as independent_comalloc, but
1098*53ee8cc1Swenshuai.xi   operates within the given space.
1099*53ee8cc1Swenshuai.xi */
1100*53ee8cc1Swenshuai.xi void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1101*53ee8cc1Swenshuai.xi                                    size_t sizes[], void* chunks[]);
1102*53ee8cc1Swenshuai.xi 
1103*53ee8cc1Swenshuai.xi /*
1104*53ee8cc1Swenshuai.xi   mspace_footprint() returns the number of bytes obtained from the
1105*53ee8cc1Swenshuai.xi   system for this space.
1106*53ee8cc1Swenshuai.xi */
1107*53ee8cc1Swenshuai.xi size_t mspace_footprint(mspace msp);
1108*53ee8cc1Swenshuai.xi 
1109*53ee8cc1Swenshuai.xi /*
1110*53ee8cc1Swenshuai.xi   mspace_max_footprint() returns the peak number of bytes obtained from the
1111*53ee8cc1Swenshuai.xi   system for this space.
1112*53ee8cc1Swenshuai.xi */
1113*53ee8cc1Swenshuai.xi size_t mspace_max_footprint(mspace msp);
1114*53ee8cc1Swenshuai.xi 
1115*53ee8cc1Swenshuai.xi 
1116*53ee8cc1Swenshuai.xi #if !NO_MALLINFO
1117*53ee8cc1Swenshuai.xi /*
1118*53ee8cc1Swenshuai.xi   mspace_mallinfo behaves as mallinfo, but reports properties of
1119*53ee8cc1Swenshuai.xi   the given space.
1120*53ee8cc1Swenshuai.xi */
1121*53ee8cc1Swenshuai.xi struct mallinfo mspace_mallinfo(mspace msp);
1122*53ee8cc1Swenshuai.xi #endif /* NO_MALLINFO */
1123*53ee8cc1Swenshuai.xi 
1124*53ee8cc1Swenshuai.xi /*
1125*53ee8cc1Swenshuai.xi   mspace_malloc_stats behaves as malloc_stats, but reports
1126*53ee8cc1Swenshuai.xi   properties of the given space.
1127*53ee8cc1Swenshuai.xi */
1128*53ee8cc1Swenshuai.xi void mspace_malloc_stats(mspace msp);
1129*53ee8cc1Swenshuai.xi 
1130*53ee8cc1Swenshuai.xi /*
1131*53ee8cc1Swenshuai.xi   mspace_trim behaves as malloc_trim, but
1132*53ee8cc1Swenshuai.xi   operates within the given space.
1133*53ee8cc1Swenshuai.xi */
1134*53ee8cc1Swenshuai.xi int mspace_trim(mspace msp, size_t pad);
1135*53ee8cc1Swenshuai.xi 
1136*53ee8cc1Swenshuai.xi /*
1137*53ee8cc1Swenshuai.xi   An alias for mallopt.
1138*53ee8cc1Swenshuai.xi */
1139*53ee8cc1Swenshuai.xi int mspace_mallopt(int, int);
1140*53ee8cc1Swenshuai.xi 
1141*53ee8cc1Swenshuai.xi #endif /* MSPACES */
1142*53ee8cc1Swenshuai.xi 
1143*53ee8cc1Swenshuai.xi #ifdef __cplusplus
1144*53ee8cc1Swenshuai.xi }  /* end of extern "C" */
1145*53ee8cc1Swenshuai.xi #endif /* __cplusplus */
1146*53ee8cc1Swenshuai.xi 
1147*53ee8cc1Swenshuai.xi /*
1148*53ee8cc1Swenshuai.xi   ========================================================================
1149*53ee8cc1Swenshuai.xi   To make a fully customizable malloc.h header file, cut everything
1150*53ee8cc1Swenshuai.xi   above this line, put into file malloc.h, edit to suit, and #include it
1151*53ee8cc1Swenshuai.xi   on the next line, as well as in programs that use this malloc.
1152*53ee8cc1Swenshuai.xi   ========================================================================
1153*53ee8cc1Swenshuai.xi */
1154*53ee8cc1Swenshuai.xi 
1155*53ee8cc1Swenshuai.xi /* #include "malloc.h" */
1156*53ee8cc1Swenshuai.xi 
1157*53ee8cc1Swenshuai.xi /*------------------------------ internal #includes ---------------------- */
1158*53ee8cc1Swenshuai.xi 
1159*53ee8cc1Swenshuai.xi #ifdef WIN32
1160*53ee8cc1Swenshuai.xi #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1161*53ee8cc1Swenshuai.xi #endif /* WIN32 */
1162*53ee8cc1Swenshuai.xi 
1163*53ee8cc1Swenshuai.xi #include <stdio.h>       /* for printing in malloc_stats */
1164*53ee8cc1Swenshuai.xi 
1165*53ee8cc1Swenshuai.xi #ifndef LACKS_ERRNO_H
1166*53ee8cc1Swenshuai.xi #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1167*53ee8cc1Swenshuai.xi #endif /* LACKS_ERRNO_H */
1168*53ee8cc1Swenshuai.xi #if FOOTERS
1169*53ee8cc1Swenshuai.xi #include <time.h>        /* for magic initialization */
1170*53ee8cc1Swenshuai.xi #endif /* FOOTERS */
1171*53ee8cc1Swenshuai.xi #ifndef LACKS_STDLIB_H
1172*53ee8cc1Swenshuai.xi //#include <stdlib.h>      /* for abort() */
1173*53ee8cc1Swenshuai.xi void abort(void);
1174*53ee8cc1Swenshuai.xi #endif /* LACKS_STDLIB_H */
1175*53ee8cc1Swenshuai.xi #ifdef DEBUG
1176*53ee8cc1Swenshuai.xi #if ABORT_ON_ASSERT_FAILURE
1177*53ee8cc1Swenshuai.xi #define assert(x) if(!(x)) ABORT
1178*53ee8cc1Swenshuai.xi #else /* ABORT_ON_ASSERT_FAILURE */
1179*53ee8cc1Swenshuai.xi #include <assert.h>
1180*53ee8cc1Swenshuai.xi #endif /* ABORT_ON_ASSERT_FAILURE */
1181*53ee8cc1Swenshuai.xi #else  /* DEBUG */
1182*53ee8cc1Swenshuai.xi #define assert(x)
1183*53ee8cc1Swenshuai.xi #endif /* DEBUG */
1184*53ee8cc1Swenshuai.xi #ifndef LACKS_STRING_H
1185*53ee8cc1Swenshuai.xi #include <string.h>      /* for memset etc */
1186*53ee8cc1Swenshuai.xi #endif  /* LACKS_STRING_H */
1187*53ee8cc1Swenshuai.xi #if USE_BUILTIN_FFS
1188*53ee8cc1Swenshuai.xi #ifndef LACKS_STRINGS_H
1189*53ee8cc1Swenshuai.xi #include <strings.h>     /* for ffs */
1190*53ee8cc1Swenshuai.xi #endif /* LACKS_STRINGS_H */
1191*53ee8cc1Swenshuai.xi #endif /* USE_BUILTIN_FFS */
1192*53ee8cc1Swenshuai.xi #if HAVE_MMAP
1193*53ee8cc1Swenshuai.xi #ifndef LACKS_SYS_MMAN_H
1194*53ee8cc1Swenshuai.xi #include <sys/mman.h>    /* for mmap */
1195*53ee8cc1Swenshuai.xi #endif /* LACKS_SYS_MMAN_H */
1196*53ee8cc1Swenshuai.xi #ifndef LACKS_FCNTL_H
1197*53ee8cc1Swenshuai.xi #include <fcntl.h>
1198*53ee8cc1Swenshuai.xi #endif /* LACKS_FCNTL_H */
1199*53ee8cc1Swenshuai.xi #endif /* HAVE_MMAP */
1200*53ee8cc1Swenshuai.xi #if HAVE_MORECORE
1201*53ee8cc1Swenshuai.xi #ifndef LACKS_UNISTD_H
1202*53ee8cc1Swenshuai.xi #include <unistd.h>     /* for sbrk */
1203*53ee8cc1Swenshuai.xi #else /* LACKS_UNISTD_H */
1204*53ee8cc1Swenshuai.xi #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1205*53ee8cc1Swenshuai.xi extern void*     sbrk(ptrdiff_t);
1206*53ee8cc1Swenshuai.xi #endif /* FreeBSD etc */
1207*53ee8cc1Swenshuai.xi #endif /* LACKS_UNISTD_H */
1208*53ee8cc1Swenshuai.xi #endif /* HAVE_MORECORE */
1209*53ee8cc1Swenshuai.xi 
1210*53ee8cc1Swenshuai.xi #ifndef WIN32
1211*53ee8cc1Swenshuai.xi #ifndef malloc_getpagesize
1212*53ee8cc1Swenshuai.xi #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1213*53ee8cc1Swenshuai.xi #    ifndef _SC_PAGE_SIZE
1214*53ee8cc1Swenshuai.xi #      define _SC_PAGE_SIZE _SC_PAGESIZE
1215*53ee8cc1Swenshuai.xi #    endif
1216*53ee8cc1Swenshuai.xi #  endif
1217*53ee8cc1Swenshuai.xi #  ifdef _SC_PAGE_SIZE
1218*53ee8cc1Swenshuai.xi #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1219*53ee8cc1Swenshuai.xi #  else
1220*53ee8cc1Swenshuai.xi #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1221*53ee8cc1Swenshuai.xi        extern size_t getpagesize();
1222*53ee8cc1Swenshuai.xi #      define malloc_getpagesize getpagesize()
1223*53ee8cc1Swenshuai.xi #    else
1224*53ee8cc1Swenshuai.xi #      ifdef WIN32 /* use supplied emulation of getpagesize */
1225*53ee8cc1Swenshuai.xi #        define malloc_getpagesize getpagesize()
1226*53ee8cc1Swenshuai.xi #      else
1227*53ee8cc1Swenshuai.xi #        ifndef LACKS_SYS_PARAM_H
1228*53ee8cc1Swenshuai.xi #          include <sys/param.h>
1229*53ee8cc1Swenshuai.xi #        endif
1230*53ee8cc1Swenshuai.xi #        ifdef EXEC_PAGESIZE
1231*53ee8cc1Swenshuai.xi #          define malloc_getpagesize EXEC_PAGESIZE
1232*53ee8cc1Swenshuai.xi #        else
1233*53ee8cc1Swenshuai.xi #          ifdef NBPG
1234*53ee8cc1Swenshuai.xi #            ifndef CLSIZE
1235*53ee8cc1Swenshuai.xi #              define malloc_getpagesize NBPG
1236*53ee8cc1Swenshuai.xi #            else
1237*53ee8cc1Swenshuai.xi #              define malloc_getpagesize (NBPG * CLSIZE)
1238*53ee8cc1Swenshuai.xi #            endif
1239*53ee8cc1Swenshuai.xi #          else
1240*53ee8cc1Swenshuai.xi #            ifdef NBPC
1241*53ee8cc1Swenshuai.xi #              define malloc_getpagesize NBPC
1242*53ee8cc1Swenshuai.xi #            else
1243*53ee8cc1Swenshuai.xi #              ifdef PAGESIZE
1244*53ee8cc1Swenshuai.xi #                define malloc_getpagesize PAGESIZE
1245*53ee8cc1Swenshuai.xi #              else /* just guess */
1246*53ee8cc1Swenshuai.xi #                define malloc_getpagesize ((size_t)4096U)
1247*53ee8cc1Swenshuai.xi #              endif
1248*53ee8cc1Swenshuai.xi #            endif
1249*53ee8cc1Swenshuai.xi #          endif
1250*53ee8cc1Swenshuai.xi #        endif
1251*53ee8cc1Swenshuai.xi #      endif
1252*53ee8cc1Swenshuai.xi #    endif
1253*53ee8cc1Swenshuai.xi #  endif
1254*53ee8cc1Swenshuai.xi #endif
1255*53ee8cc1Swenshuai.xi #endif
1256*53ee8cc1Swenshuai.xi 
1257*53ee8cc1Swenshuai.xi /* ------------------- size_t and alignment properties -------------------- */
1258*53ee8cc1Swenshuai.xi 
1259*53ee8cc1Swenshuai.xi #if 1
1260*53ee8cc1Swenshuai.xi #include "dlmalloc.h"
1261*53ee8cc1Swenshuai.xi #else
1262*53ee8cc1Swenshuai.xi // moved out to dlmalloc.h
1263*53ee8cc1Swenshuai.xi /* The byte and bit size of a size_t */
1264*53ee8cc1Swenshuai.xi #define SIZE_T_SIZE         (sizeof(size_t))
1265*53ee8cc1Swenshuai.xi #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1266*53ee8cc1Swenshuai.xi 
1267*53ee8cc1Swenshuai.xi /* Some constants coerced to size_t */
1268*53ee8cc1Swenshuai.xi /* Annoying but necessary to avoid errors on some platforms */
1269*53ee8cc1Swenshuai.xi #define SIZE_T_ZERO         ((size_t)0)
1270*53ee8cc1Swenshuai.xi #define SIZE_T_ONE          ((size_t)1)
1271*53ee8cc1Swenshuai.xi #define SIZE_T_TWO          ((size_t)2)
1272*53ee8cc1Swenshuai.xi #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1273*53ee8cc1Swenshuai.xi #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1274*53ee8cc1Swenshuai.xi #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1275*53ee8cc1Swenshuai.xi #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1276*53ee8cc1Swenshuai.xi 
1277*53ee8cc1Swenshuai.xi /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1278*53ee8cc1Swenshuai.xi #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1279*53ee8cc1Swenshuai.xi 
1280*53ee8cc1Swenshuai.xi /* True if address a has acceptable alignment */
1281*53ee8cc1Swenshuai.xi #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1282*53ee8cc1Swenshuai.xi 
1283*53ee8cc1Swenshuai.xi /* the number of bytes to offset an address to align it */
1284*53ee8cc1Swenshuai.xi #define align_offset(A)\
1285*53ee8cc1Swenshuai.xi  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1286*53ee8cc1Swenshuai.xi   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1287*53ee8cc1Swenshuai.xi 
1288*53ee8cc1Swenshuai.xi #endif  // moved out to dlmalloc.h
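
/*
  Worked example: assuming MALLOC_ALIGNMENT is 16, CHUNK_ALIGN_MASK is
  15. For an address A whose low bits are 0x8, (size_t)A & 15 == 8, so
  align_offset(A) == (16 - 8) & 15 == 8: advancing A by 8 bytes restores
  16-byte alignment. For an already aligned A the offset is 0.
*/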
1289*53ee8cc1Swenshuai.xi 
1290*53ee8cc1Swenshuai.xi /* -------------------------- MMAP preliminaries ------------------------- */
1291*53ee8cc1Swenshuai.xi 
1292*53ee8cc1Swenshuai.xi /*
1293*53ee8cc1Swenshuai.xi    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1294*53ee8cc1Swenshuai.xi    checks to fail so compiler optimizer can delete code rather than
1295*53ee8cc1Swenshuai.xi    using so many "#if"s.
1296*53ee8cc1Swenshuai.xi */
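
/*
   For instance, with HAVE_MMAP == 0 a test such as

     if (CALL_MMAP(size) == MFAIL)
       fall_back_to_morecore();   // placeholder name, for illustration

   expands to a comparison of constants, so the compiler can discard
   the whole mmap branch.
*/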
1297*53ee8cc1Swenshuai.xi 
1298*53ee8cc1Swenshuai.xi 
1299*53ee8cc1Swenshuai.xi /* MORECORE and MMAP must return MFAIL on failure */
1300*53ee8cc1Swenshuai.xi #define MFAIL                ((void*)(MAX_SIZE_T))
1301*53ee8cc1Swenshuai.xi #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1302*53ee8cc1Swenshuai.xi 
1303*53ee8cc1Swenshuai.xi #if !HAVE_MMAP
1304*53ee8cc1Swenshuai.xi #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
1305*53ee8cc1Swenshuai.xi #define USE_MMAP_BIT         (SIZE_T_ZERO)
1306*53ee8cc1Swenshuai.xi #define CALL_MMAP(s)         MFAIL
1307*53ee8cc1Swenshuai.xi #define CALL_MUNMAP(a, s)    (-1)
1308*53ee8cc1Swenshuai.xi #define DIRECT_MMAP(s)       MFAIL
1309*53ee8cc1Swenshuai.xi 
1310*53ee8cc1Swenshuai.xi #else /* HAVE_MMAP */
1311*53ee8cc1Swenshuai.xi #define IS_MMAPPED_BIT       (SIZE_T_ONE)
1312*53ee8cc1Swenshuai.xi #define USE_MMAP_BIT         (SIZE_T_ONE)
1313*53ee8cc1Swenshuai.xi 
1314*53ee8cc1Swenshuai.xi #ifndef WIN32
1315*53ee8cc1Swenshuai.xi #define CALL_MUNMAP(a, s)    munmap((a), (s))
1316*53ee8cc1Swenshuai.xi #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1317*53ee8cc1Swenshuai.xi #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1318*53ee8cc1Swenshuai.xi #define MAP_ANONYMOUS        MAP_ANON
1319*53ee8cc1Swenshuai.xi #endif /* MAP_ANON */
1320*53ee8cc1Swenshuai.xi #ifdef MAP_ANONYMOUS
1321*53ee8cc1Swenshuai.xi #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1322*53ee8cc1Swenshuai.xi #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1323*53ee8cc1Swenshuai.xi #else /* MAP_ANONYMOUS */
1324*53ee8cc1Swenshuai.xi /*
1325*53ee8cc1Swenshuai.xi    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1326*53ee8cc1Swenshuai.xi    is unlikely to be needed, but is supplied just in case.
1327*53ee8cc1Swenshuai.xi */
1328*53ee8cc1Swenshuai.xi #define MMAP_FLAGS           (MAP_PRIVATE)
1329*53ee8cc1Swenshuai.xi static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1330*53ee8cc1Swenshuai.xi #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1331*53ee8cc1Swenshuai.xi            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1332*53ee8cc1Swenshuai.xi             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1333*53ee8cc1Swenshuai.xi             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1334*53ee8cc1Swenshuai.xi #endif /* MAP_ANONYMOUS */
1335*53ee8cc1Swenshuai.xi 
1336*53ee8cc1Swenshuai.xi #define DIRECT_MMAP(s)       CALL_MMAP(s)
1337*53ee8cc1Swenshuai.xi #else /* WIN32 */
1338*53ee8cc1Swenshuai.xi 
1339*53ee8cc1Swenshuai.xi /* Win32 MMAP via VirtualAlloc */
1340*53ee8cc1Swenshuai.xi static void* win32mmap(size_t size) {
1341*53ee8cc1Swenshuai.xi   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1342*53ee8cc1Swenshuai.xi   return (ptr != 0)? ptr: MFAIL;
1343*53ee8cc1Swenshuai.xi }
1344*53ee8cc1Swenshuai.xi 
1345*53ee8cc1Swenshuai.xi /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1346*53ee8cc1Swenshuai.xi static void* win32direct_mmap(size_t size) {
1347*53ee8cc1Swenshuai.xi   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1348*53ee8cc1Swenshuai.xi                            PAGE_READWRITE);
1349*53ee8cc1Swenshuai.xi   return (ptr != 0)? ptr: MFAIL;
1350*53ee8cc1Swenshuai.xi }
1351*53ee8cc1Swenshuai.xi 
1352*53ee8cc1Swenshuai.xi /* This function supports releasing coalesced segments */
1353*53ee8cc1Swenshuai.xi static int win32munmap(void* ptr, size_t size) {
1354*53ee8cc1Swenshuai.xi   MEMORY_BASIC_INFORMATION minfo;
1355*53ee8cc1Swenshuai.xi   char* cptr = ptr;
1356*53ee8cc1Swenshuai.xi   while (size) {
1357*53ee8cc1Swenshuai.xi     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1358*53ee8cc1Swenshuai.xi       return -1;
1359*53ee8cc1Swenshuai.xi     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1360*53ee8cc1Swenshuai.xi         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1361*53ee8cc1Swenshuai.xi       return -1;
1362*53ee8cc1Swenshuai.xi     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1363*53ee8cc1Swenshuai.xi       return -1;
1364*53ee8cc1Swenshuai.xi     cptr += minfo.RegionSize;
1365*53ee8cc1Swenshuai.xi     size -= minfo.RegionSize;
1366*53ee8cc1Swenshuai.xi   }
1367*53ee8cc1Swenshuai.xi   return 0;
1368*53ee8cc1Swenshuai.xi }
1369*53ee8cc1Swenshuai.xi 
1370*53ee8cc1Swenshuai.xi #define CALL_MMAP(s)         win32mmap(s)
1371*53ee8cc1Swenshuai.xi #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1372*53ee8cc1Swenshuai.xi #define DIRECT_MMAP(s)       win32direct_mmap(s)
1373*53ee8cc1Swenshuai.xi #endif /* WIN32 */
1374*53ee8cc1Swenshuai.xi #endif /* HAVE_MMAP */
1375*53ee8cc1Swenshuai.xi 
1376*53ee8cc1Swenshuai.xi #if HAVE_MMAP && HAVE_MREMAP
1377*53ee8cc1Swenshuai.xi #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1378*53ee8cc1Swenshuai.xi #else  /* HAVE_MMAP && HAVE_MREMAP */
1379*53ee8cc1Swenshuai.xi #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1380*53ee8cc1Swenshuai.xi #endif /* HAVE_MMAP && HAVE_MREMAP */
1381*53ee8cc1Swenshuai.xi 
1382*53ee8cc1Swenshuai.xi #if HAVE_MORECORE
1383*53ee8cc1Swenshuai.xi #define CALL_MORECORE(S)     MORECORE(S)
1384*53ee8cc1Swenshuai.xi #else  /* HAVE_MORECORE */
1385*53ee8cc1Swenshuai.xi #define CALL_MORECORE(S)     MFAIL
1386*53ee8cc1Swenshuai.xi #endif /* HAVE_MORECORE */
1387*53ee8cc1Swenshuai.xi 
1388*53ee8cc1Swenshuai.xi /* mstate bit set if contiguous morecore disabled or failed */
1389*53ee8cc1Swenshuai.xi #define USE_NONCONTIGUOUS_BIT (4U)
1390*53ee8cc1Swenshuai.xi 
1391*53ee8cc1Swenshuai.xi /* segment bit set in create_mspace_with_base */
1392*53ee8cc1Swenshuai.xi #define EXTERN_BIT            (8U)
1393*53ee8cc1Swenshuai.xi 
1394*53ee8cc1Swenshuai.xi 
1395*53ee8cc1Swenshuai.xi /* --------------------------- Lock preliminaries ------------------------ */
1396*53ee8cc1Swenshuai.xi 
1397*53ee8cc1Swenshuai.xi #if USE_LOCKS
1398*53ee8cc1Swenshuai.xi 
1399*53ee8cc1Swenshuai.xi /*
1400*53ee8cc1Swenshuai.xi   When locks are defined, there are up to two global locks:
1401*53ee8cc1Swenshuai.xi 
1402*53ee8cc1Swenshuai.xi   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1403*53ee8cc1Swenshuai.xi     MORECORE.  In many cases sys_alloc requires two calls that should
1404*53ee8cc1Swenshuai.xi     not be interleaved with calls by other threads.  This does not
1405*53ee8cc1Swenshuai.xi     protect against direct calls to MORECORE by other threads not
1406*53ee8cc1Swenshuai.xi     using this lock, so there is still code to cope as best we can with
1407*53ee8cc1Swenshuai.xi     interference.
1408*53ee8cc1Swenshuai.xi 
1409*53ee8cc1Swenshuai.xi   * magic_init_mutex ensures that mparams.magic and other
1410*53ee8cc1Swenshuai.xi     unique mparams values are initialized only once.
1411*53ee8cc1Swenshuai.xi */
1412*53ee8cc1Swenshuai.xi 
1413*53ee8cc1Swenshuai.xi #ifndef WIN32
1414*53ee8cc1Swenshuai.xi /* Posix locks would be the usual default; this port uses ucos semaphores instead */
1415*53ee8cc1Swenshuai.xi #if 0
1416*53ee8cc1Swenshuai.xi #include <pthread.h>
1417*53ee8cc1Swenshuai.xi #define MLOCK_T pthread_mutex_t
1418*53ee8cc1Swenshuai.xi #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
1419*53ee8cc1Swenshuai.xi #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
1420*53ee8cc1Swenshuai.xi #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
1421*53ee8cc1Swenshuai.xi #endif
1422*53ee8cc1Swenshuai.xi 
1423*53ee8cc1Swenshuai.xi #ifndef ENOMEM
1423*53ee8cc1Swenshuai.xi #define ENOMEM               12
1423*53ee8cc1Swenshuai.xi #endif /* ENOMEM */
1424*53ee8cc1Swenshuai.xi 
1425*53ee8cc1Swenshuai.xi #include "ucos.h"
1426*53ee8cc1Swenshuai.xi 
1427*53ee8cc1Swenshuai.xi #define INT8U                unsigned char
1428*53ee8cc1Swenshuai.xi 
1429*53ee8cc1Swenshuai.xi #define abort()              *((int*)0) = 0  /* no stdlib abort() on ucos: trap via NULL write */
1430*53ee8cc1Swenshuai.xi #define MLOCK_T              OS_EVENT*
1431*53ee8cc1Swenshuai.xi #define INITIAL_LOCK(l)      ((NULL == ((*(l)) = OSSemCreate(1)))? -1 : 0)
1432*53ee8cc1Swenshuai.xi #define ACQUIRE_LOCK(l)      ({ INT8U u8Err; OSSemPend((*(l)), 0, &u8Err); u8Err; })
1433*53ee8cc1Swenshuai.xi #define RELEASE_LOCK(l)      ({OSSemPost((*(l))); (0); })
1434*53ee8cc1Swenshuai.xi 
1435*53ee8cc1Swenshuai.xi #if HAVE_MORECORE
1436*53ee8cc1Swenshuai.xi // static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1437*53ee8cc1Swenshuai.xi static MLOCK_T morecore_mutex = NULL;
1438*53ee8cc1Swenshuai.xi #endif /* HAVE_MORECORE */
1439*53ee8cc1Swenshuai.xi 
1440*53ee8cc1Swenshuai.xi // static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1441*53ee8cc1Swenshuai.xi static MLOCK_T magic_init_mutex = NULL;
1442*53ee8cc1Swenshuai.xi 
1443*53ee8cc1Swenshuai.xi void dlinit(void)
1444*53ee8cc1Swenshuai.xi {
1445*53ee8cc1Swenshuai.xi #if HAVE_MORECORE
1446*53ee8cc1Swenshuai.xi     INITIAL_LOCK(&morecore_mutex);
1447*53ee8cc1Swenshuai.xi #endif
1448*53ee8cc1Swenshuai.xi     INITIAL_LOCK(&magic_init_mutex);
1449*53ee8cc1Swenshuai.xi }
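
/*
  Note: because these ucos semaphores replace statically initialized
  mutexes, dlinit() must run before the first allocation; otherwise
  ACQUIRE_LOCK would pend on a NULL OS_EVENT. A minimal startup sketch
  (task creation details omitted):

  void system_start(void) {
    OSInit();
    dlinit();   // create the malloc locks before any malloc call
    // ... create tasks, then OSStart() ...
  }
*/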
1450*53ee8cc1Swenshuai.xi 
1451*53ee8cc1Swenshuai.xi #else /* WIN32 */
1452*53ee8cc1Swenshuai.xi /*
1453*53ee8cc1Swenshuai.xi    Because lock-protected regions have bounded times, and there
1454*53ee8cc1Swenshuai.xi    are no recursive lock calls, we can use simple spinlocks.
1455*53ee8cc1Swenshuai.xi */
1456*53ee8cc1Swenshuai.xi 
1457*53ee8cc1Swenshuai.xi #define MLOCK_T long
1458*53ee8cc1Swenshuai.xi static int win32_acquire_lock (MLOCK_T *sl) {
1459*53ee8cc1Swenshuai.xi   for (;;) {
1460*53ee8cc1Swenshuai.xi #ifdef InterlockedCompareExchangePointer
1461*53ee8cc1Swenshuai.xi     if (!InterlockedCompareExchange(sl, 1, 0))
1462*53ee8cc1Swenshuai.xi       return 0;
1463*53ee8cc1Swenshuai.xi #else  /* Use older void* version */
1464*53ee8cc1Swenshuai.xi     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1465*53ee8cc1Swenshuai.xi       return 0;
1466*53ee8cc1Swenshuai.xi #endif /* InterlockedCompareExchangePointer */
1467*53ee8cc1Swenshuai.xi     Sleep (0);
1468*53ee8cc1Swenshuai.xi   }
1469*53ee8cc1Swenshuai.xi }
1470*53ee8cc1Swenshuai.xi 
1471*53ee8cc1Swenshuai.xi static void win32_release_lock (MLOCK_T *sl) {
1472*53ee8cc1Swenshuai.xi   InterlockedExchange (sl, 0);
1473*53ee8cc1Swenshuai.xi }
1474*53ee8cc1Swenshuai.xi 
1475*53ee8cc1Swenshuai.xi #define INITIAL_LOCK(l)      *(l)=0
1476*53ee8cc1Swenshuai.xi #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
1477*53ee8cc1Swenshuai.xi #define RELEASE_LOCK(l)      win32_release_lock(l)
1478*53ee8cc1Swenshuai.xi #if HAVE_MORECORE
1479*53ee8cc1Swenshuai.xi static MLOCK_T morecore_mutex;
1480*53ee8cc1Swenshuai.xi #endif /* HAVE_MORECORE */
1481*53ee8cc1Swenshuai.xi static MLOCK_T magic_init_mutex;
1482*53ee8cc1Swenshuai.xi #endif /* WIN32 */
1483*53ee8cc1Swenshuai.xi 
1484*53ee8cc1Swenshuai.xi #define USE_LOCK_BIT               (2U)
1485*53ee8cc1Swenshuai.xi #else  /* USE_LOCKS */
1486*53ee8cc1Swenshuai.xi #define USE_LOCK_BIT               (0U)
1487*53ee8cc1Swenshuai.xi #define INITIAL_LOCK(l)
1488*53ee8cc1Swenshuai.xi #endif /* USE_LOCKS */
1489*53ee8cc1Swenshuai.xi 
1490*53ee8cc1Swenshuai.xi #if USE_LOCKS && HAVE_MORECORE
1491*53ee8cc1Swenshuai.xi #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1492*53ee8cc1Swenshuai.xi #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1493*53ee8cc1Swenshuai.xi #else /* USE_LOCKS && HAVE_MORECORE */
1494*53ee8cc1Swenshuai.xi #define ACQUIRE_MORECORE_LOCK()
1495*53ee8cc1Swenshuai.xi #define RELEASE_MORECORE_LOCK()
1496*53ee8cc1Swenshuai.xi #endif /* USE_LOCKS && HAVE_MORECORE */
1497*53ee8cc1Swenshuai.xi 
1498*53ee8cc1Swenshuai.xi #if USE_LOCKS
1499*53ee8cc1Swenshuai.xi #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1500*53ee8cc1Swenshuai.xi #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1501*53ee8cc1Swenshuai.xi #else  /* USE_LOCKS */
1502*53ee8cc1Swenshuai.xi #define ACQUIRE_MAGIC_INIT_LOCK()
1503*53ee8cc1Swenshuai.xi #define RELEASE_MAGIC_INIT_LOCK()
1504*53ee8cc1Swenshuai.xi #endif /* USE_LOCKS */
1505*53ee8cc1Swenshuai.xi 
1506*53ee8cc1Swenshuai.xi 
1507*53ee8cc1Swenshuai.xi /* -----------------------  Chunk representations ------------------------ */
1508*53ee8cc1Swenshuai.xi 
1509*53ee8cc1Swenshuai.xi /*
1510*53ee8cc1Swenshuai.xi   (The following includes lightly edited explanations by Colin Plumb.)
1511*53ee8cc1Swenshuai.xi 
1512*53ee8cc1Swenshuai.xi   The malloc_chunk declaration below is misleading (but accurate and
1513*53ee8cc1Swenshuai.xi   necessary).  It declares a "view" into memory allowing access to
1514*53ee8cc1Swenshuai.xi   necessary fields at known offsets from a given base.
1515*53ee8cc1Swenshuai.xi 
1516*53ee8cc1Swenshuai.xi   Chunks of memory are maintained using a `boundary tag' method as
1517*53ee8cc1Swenshuai.xi   originally described by Knuth.  (See the paper by Paul Wilson
1518*53ee8cc1Swenshuai.xi   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1519*53ee8cc1Swenshuai.xi   techniques.)  Sizes of free chunks are stored both in the front of
1520*53ee8cc1Swenshuai.xi   each chunk and at the end.  This makes consolidating fragmented
1521*53ee8cc1Swenshuai.xi   chunks into bigger chunks fast.  The head fields also hold bits
1522*53ee8cc1Swenshuai.xi   representing whether chunks are free or in use.
1523*53ee8cc1Swenshuai.xi 
1524*53ee8cc1Swenshuai.xi   Here are some pictures to make it clearer.  They are "exploded" to
1525*53ee8cc1Swenshuai.xi   show that the state of a chunk can be thought of as extending from
1526*53ee8cc1Swenshuai.xi   the high 31 bits of the head field of its header through the
1527*53ee8cc1Swenshuai.xi   prev_foot and PINUSE_BIT bit of the following chunk header.
1528*53ee8cc1Swenshuai.xi 
1529*53ee8cc1Swenshuai.xi   A chunk that's in use looks like:
1530*53ee8cc1Swenshuai.xi 
1531*53ee8cc1Swenshuai.xi    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1532*53ee8cc1Swenshuai.xi            | Size of previous chunk (if P = 0)                             |
1533*53ee8cc1Swenshuai.xi            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1534*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1535*53ee8cc1Swenshuai.xi          | Size of this chunk                                         1| +-+
1536*53ee8cc1Swenshuai.xi    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1537*53ee8cc1Swenshuai.xi          |                                                               |
1538*53ee8cc1Swenshuai.xi          +-                                                             -+
1539*53ee8cc1Swenshuai.xi          |                                                               |
1540*53ee8cc1Swenshuai.xi          +-                                                             -+
1541*53ee8cc1Swenshuai.xi          |                                                               :
1542*53ee8cc1Swenshuai.xi          +-      size - sizeof(size_t) available payload bytes          -+
1543*53ee8cc1Swenshuai.xi          :                                                               |
1544*53ee8cc1Swenshuai.xi  chunk-> +-                                                             -+
1545*53ee8cc1Swenshuai.xi          |                                                               |
1546*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547*53ee8cc1Swenshuai.xi        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1548*53ee8cc1Swenshuai.xi        | Size of next chunk (may or may not be in use)               | +-+
1549*53ee8cc1Swenshuai.xi  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1550*53ee8cc1Swenshuai.xi 
1551*53ee8cc1Swenshuai.xi     And if it's free, it looks like this:
1552*53ee8cc1Swenshuai.xi 
1553*53ee8cc1Swenshuai.xi    chunk-> +-                                                             -+
1554*53ee8cc1Swenshuai.xi            | User payload (must be in use, or we would have merged!)       |
1555*53ee8cc1Swenshuai.xi            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1556*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1557*53ee8cc1Swenshuai.xi          | Size of this chunk                                         0| +-+
1558*53ee8cc1Swenshuai.xi    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1559*53ee8cc1Swenshuai.xi          | Next pointer                                                  |
1560*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1561*53ee8cc1Swenshuai.xi          | Prev pointer                                                  |
1562*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1563*53ee8cc1Swenshuai.xi          |                                                               :
1564*53ee8cc1Swenshuai.xi          +-      size - sizeof(struct chunk) unused bytes               -+
1565*53ee8cc1Swenshuai.xi          :                                                               |
1566*53ee8cc1Swenshuai.xi  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1567*53ee8cc1Swenshuai.xi          | Size of this chunk                                            |
1568*53ee8cc1Swenshuai.xi          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1569*53ee8cc1Swenshuai.xi        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1570*53ee8cc1Swenshuai.xi        | Size of next chunk (must be in use, or we would have merged)| +-+
1571*53ee8cc1Swenshuai.xi  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1572*53ee8cc1Swenshuai.xi        |                                                               :
1573*53ee8cc1Swenshuai.xi        +- User payload                                                -+
1574*53ee8cc1Swenshuai.xi        :                                                               |
1575*53ee8cc1Swenshuai.xi        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1576*53ee8cc1Swenshuai.xi                                                                      |0|
1577*53ee8cc1Swenshuai.xi                                                                      +-+
1578*53ee8cc1Swenshuai.xi   Note that since we always merge adjacent free chunks, the chunks
1579*53ee8cc1Swenshuai.xi   adjacent to a free chunk must be in use.
1580*53ee8cc1Swenshuai.xi 
1581*53ee8cc1Swenshuai.xi   Given a pointer to a chunk (which can be derived trivially from the
1582*53ee8cc1Swenshuai.xi   payload pointer) we can, in O(1) time, find out whether the adjacent
1583*53ee8cc1Swenshuai.xi   chunks are free, and if so, unlink them from the lists that they
1584*53ee8cc1Swenshuai.xi   are on and merge them with the current chunk.
1585*53ee8cc1Swenshuai.xi 
1586*53ee8cc1Swenshuai.xi   Chunks always begin on even word boundaries, so the mem portion
1587*53ee8cc1Swenshuai.xi   (which is returned to the user) is also on an even word boundary, and
1588*53ee8cc1Swenshuai.xi   thus at least double-word aligned.
1589*53ee8cc1Swenshuai.xi 
1590*53ee8cc1Swenshuai.xi   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1591*53ee8cc1Swenshuai.xi   chunk size (which is always a multiple of two words), is an in-use
1592*53ee8cc1Swenshuai.xi   bit for the *previous* chunk.  If that bit is *clear*, then the
1593*53ee8cc1Swenshuai.xi   word before the current chunk size contains the previous chunk
1594*53ee8cc1Swenshuai.xi   size, and can be used to find the front of the previous chunk.
1595*53ee8cc1Swenshuai.xi   The very first chunk allocated always has this bit set, preventing
1596*53ee8cc1Swenshuai.xi   access to non-existent (or non-owned) memory. If pinuse is set for
1597*53ee8cc1Swenshuai.xi   any given chunk, then you CANNOT determine the size of the
1598*53ee8cc1Swenshuai.xi   previous chunk, and might even get a memory addressing fault when
1599*53ee8cc1Swenshuai.xi   trying to do so.
1600*53ee8cc1Swenshuai.xi 
1601*53ee8cc1Swenshuai.xi   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1602*53ee8cc1Swenshuai.xi   the chunk size redundantly records whether the current chunk is
1603*53ee8cc1Swenshuai.xi   inuse. This redundancy enables usage checks within free and realloc,
1604*53ee8cc1Swenshuai.xi   and reduces indirection when freeing and consolidating chunks.
1605*53ee8cc1Swenshuai.xi 
1606*53ee8cc1Swenshuai.xi   Each freshly allocated chunk must have both cinuse and pinuse set.
1607*53ee8cc1Swenshuai.xi   That is, each allocated chunk borders either a previously allocated
1608*53ee8cc1Swenshuai.xi   and still in-use chunk, or the base of its memory arena. This is
1609*53ee8cc1Swenshuai.xi   ensured by making all allocations from the `lowest' part of any
1610*53ee8cc1Swenshuai.xi   found chunk.  Further, no free chunk physically borders another one,
1611*53ee8cc1Swenshuai.xi   so each free chunk is known to be preceded and followed by either
1612*53ee8cc1Swenshuai.xi   inuse chunks or the ends of memory.
1613*53ee8cc1Swenshuai.xi 
1614*53ee8cc1Swenshuai.xi   Note that the `foot' of the current chunk is actually represented
1615*53ee8cc1Swenshuai.xi   as the prev_foot of the NEXT chunk. This makes it easier to
1616*53ee8cc1Swenshuai.xi   deal with alignments etc but can be very confusing when trying
1617*53ee8cc1Swenshuai.xi   to extend or adapt this code.
1618*53ee8cc1Swenshuai.xi 
1619*53ee8cc1Swenshuai.xi   The exceptions to all this are
1620*53ee8cc1Swenshuai.xi 
1621*53ee8cc1Swenshuai.xi      1. The special chunk `top' is the top-most available chunk (i.e.,
1622*53ee8cc1Swenshuai.xi         the one bordering the end of available memory). It is treated
1623*53ee8cc1Swenshuai.xi         specially.  Top is never included in any bin, is used only if
1624*53ee8cc1Swenshuai.xi         no other chunk is available, and is released back to the
1625*53ee8cc1Swenshuai.xi         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1626*53ee8cc1Swenshuai.xi         the top chunk is treated as larger (and thus less well
1627*53ee8cc1Swenshuai.xi         fitting) than any other available chunk.  The top chunk
1628*53ee8cc1Swenshuai.xi         doesn't update its trailing size field since there is no next
1629*53ee8cc1Swenshuai.xi         contiguous chunk that would have to index off it. However,
1630*53ee8cc1Swenshuai.xi         space is still allocated for it (TOP_FOOT_SIZE) to enable
1631*53ee8cc1Swenshuai.xi         separation or merging when space is extended.
1632*53ee8cc1Swenshuai.xi 
1633*53ee8cc1Swenshuai.xi      2. Chunks allocated via mmap, which have the lowest-order bit
1634*53ee8cc1Swenshuai.xi         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1635*53ee8cc1Swenshuai.xi         PINUSE_BIT in their head fields.  Because they are allocated
1636*53ee8cc1Swenshuai.xi         one-by-one, each must carry its own prev_foot field, which is
1637*53ee8cc1Swenshuai.xi         also used to hold the offset this chunk has within its mmapped
1638*53ee8cc1Swenshuai.xi         region, which is needed to preserve alignment. Each mmapped
1639*53ee8cc1Swenshuai.xi         chunk is trailed by the first two fields of a fake next-chunk
1640*53ee8cc1Swenshuai.xi         for sake of usage checks.
1641*53ee8cc1Swenshuai.xi 
1642*53ee8cc1Swenshuai.xi */
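
/*
  Worked example: on a 32-bit target, an in-use chunk of 48 bytes whose
  predecessor is also in use has head = 48 | PINUSE_BIT | CINUSE_BIT =
  0x33; chunksize() masks off the two low bits to recover 48. If that
  chunk is freed while its predecessor stays in use, its head becomes
  48 | PINUSE_BIT = 0x31, the size 48 is replicated into the prev_foot
  of the following chunk, and that chunk's PINUSE_BIT is cleared.
*/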
1643*53ee8cc1Swenshuai.xi 
1644*53ee8cc1Swenshuai.xi #if 0 // 0 moved out to dlmalloc.h
1645*53ee8cc1Swenshuai.xi struct malloc_chunk {
1646*53ee8cc1Swenshuai.xi   size_t               prev_foot;  /* Size of previous chunk (if free).  */
1647*53ee8cc1Swenshuai.xi   size_t               head;       /* Size and inuse bits. */
1648*53ee8cc1Swenshuai.xi   struct malloc_chunk* fd;         /* double links -- used only if free. */
1649*53ee8cc1Swenshuai.xi   struct malloc_chunk* bk;
1650*53ee8cc1Swenshuai.xi };
1651*53ee8cc1Swenshuai.xi 
1652*53ee8cc1Swenshuai.xi typedef struct malloc_chunk  mchunk;
1653*53ee8cc1Swenshuai.xi typedef struct malloc_chunk* mchunkptr;
1654*53ee8cc1Swenshuai.xi typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1655*53ee8cc1Swenshuai.xi typedef unsigned int bindex_t;         /* Described below */
1656*53ee8cc1Swenshuai.xi typedef unsigned int binmap_t;         /* Described below */
1657*53ee8cc1Swenshuai.xi typedef unsigned int flag_t;           /* The type of various bit flag sets */
1658*53ee8cc1Swenshuai.xi 
1659*53ee8cc1Swenshuai.xi /* ------------------- Chunks sizes and alignments ----------------------- */
1660*53ee8cc1Swenshuai.xi 
1661*53ee8cc1Swenshuai.xi #define MCHUNK_SIZE         (sizeof(mchunk))
1662*53ee8cc1Swenshuai.xi 
1663*53ee8cc1Swenshuai.xi #if FOOTERS
1664*53ee8cc1Swenshuai.xi #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1665*53ee8cc1Swenshuai.xi #else /* FOOTERS */
1666*53ee8cc1Swenshuai.xi #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1667*53ee8cc1Swenshuai.xi #endif /* FOOTERS */
1668*53ee8cc1Swenshuai.xi 
1669*53ee8cc1Swenshuai.xi /* MMapped chunks need a second word of overhead ... */
1670*53ee8cc1Swenshuai.xi #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1671*53ee8cc1Swenshuai.xi /* ... and additional padding for fake next-chunk at foot */
1672*53ee8cc1Swenshuai.xi #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1673*53ee8cc1Swenshuai.xi 
1674*53ee8cc1Swenshuai.xi /* The smallest size we can malloc is an aligned minimal chunk */
1675*53ee8cc1Swenshuai.xi #define MIN_CHUNK_SIZE\
1676*53ee8cc1Swenshuai.xi   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1677*53ee8cc1Swenshuai.xi 
1678*53ee8cc1Swenshuai.xi /* conversion from malloc headers to user pointers, and back */
1679*53ee8cc1Swenshuai.xi #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1680*53ee8cc1Swenshuai.xi #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1681*53ee8cc1Swenshuai.xi /* chunk associated with aligned address A */
1682*53ee8cc1Swenshuai.xi #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1683*53ee8cc1Swenshuai.xi 
1684*53ee8cc1Swenshuai.xi /* Bounds on request (not chunk) sizes. */
1685*53ee8cc1Swenshuai.xi #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1686*53ee8cc1Swenshuai.xi #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1687*53ee8cc1Swenshuai.xi 
1688*53ee8cc1Swenshuai.xi /* pad request bytes into a usable size */
1689*53ee8cc1Swenshuai.xi #define pad_request(req) \
1690*53ee8cc1Swenshuai.xi    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1691*53ee8cc1Swenshuai.xi 
1692*53ee8cc1Swenshuai.xi /* pad request, checking for minimum (but not maximum) */
1693*53ee8cc1Swenshuai.xi #define request2size(req) \
1694*53ee8cc1Swenshuai.xi   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
1695*53ee8cc1Swenshuai.xi 
1696*53ee8cc1Swenshuai.xi 
1697*53ee8cc1Swenshuai.xi /* ------------------ Operations on head and foot fields ----------------- */
1698*53ee8cc1Swenshuai.xi 
1699*53ee8cc1Swenshuai.xi /*
1700*53ee8cc1Swenshuai.xi   The head field of a chunk is or'ed with PINUSE_BIT when the previous
1701*53ee8cc1Swenshuai.xi   adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
1702*53ee8cc1Swenshuai.xi   in use. If the chunk was obtained with mmap, the prev_foot field has
1703*53ee8cc1Swenshuai.xi   IS_MMAPPED_BIT set and additionally holds the offset from the base of
1704*53ee8cc1Swenshuai.xi   the mmapped region to the base of the chunk.
1705*53ee8cc1Swenshuai.xi */
1706*53ee8cc1Swenshuai.xi 
1707*53ee8cc1Swenshuai.xi #define PINUSE_BIT          (SIZE_T_ONE)
1708*53ee8cc1Swenshuai.xi #define CINUSE_BIT          (SIZE_T_TWO)
1709*53ee8cc1Swenshuai.xi #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1710*53ee8cc1Swenshuai.xi 
1711*53ee8cc1Swenshuai.xi /* Head value for fenceposts */
1712*53ee8cc1Swenshuai.xi #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1713*53ee8cc1Swenshuai.xi 
1714*53ee8cc1Swenshuai.xi /* extraction of fields from head words */
1715*53ee8cc1Swenshuai.xi #define cinuse(p)           ((p)->head & CINUSE_BIT)
1716*53ee8cc1Swenshuai.xi #define pinuse(p)           ((p)->head & PINUSE_BIT)
1717*53ee8cc1Swenshuai.xi #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1718*53ee8cc1Swenshuai.xi 
1719*53ee8cc1Swenshuai.xi #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1720*53ee8cc1Swenshuai.xi #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1721*53ee8cc1Swenshuai.xi 
1722*53ee8cc1Swenshuai.xi /* Treat space at ptr +/- offset as a chunk */
1723*53ee8cc1Swenshuai.xi #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1724*53ee8cc1Swenshuai.xi #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1725*53ee8cc1Swenshuai.xi 
1726*53ee8cc1Swenshuai.xi /* Ptr to next or previous physical malloc_chunk. */
1727*53ee8cc1Swenshuai.xi #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1728*53ee8cc1Swenshuai.xi #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1729*53ee8cc1Swenshuai.xi 
1730*53ee8cc1Swenshuai.xi /* extract next chunk's pinuse bit */
1731*53ee8cc1Swenshuai.xi #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1732*53ee8cc1Swenshuai.xi 
1733*53ee8cc1Swenshuai.xi /* Get/set size at footer */
1734*53ee8cc1Swenshuai.xi #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1735*53ee8cc1Swenshuai.xi #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1736*53ee8cc1Swenshuai.xi 
1737*53ee8cc1Swenshuai.xi /* Set size, pinuse bit, and foot */
1738*53ee8cc1Swenshuai.xi #define set_size_and_pinuse_of_free_chunk(p, s)\
1739*53ee8cc1Swenshuai.xi   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1740*53ee8cc1Swenshuai.xi 
1741*53ee8cc1Swenshuai.xi /* Set size, pinuse bit, foot, and clear next pinuse */
1742*53ee8cc1Swenshuai.xi #define set_free_with_pinuse(p, s, n)\
1743*53ee8cc1Swenshuai.xi   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1744*53ee8cc1Swenshuai.xi 
1745*53ee8cc1Swenshuai.xi #define is_mmapped(p)\
1746*53ee8cc1Swenshuai.xi   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1747*53ee8cc1Swenshuai.xi 
1748*53ee8cc1Swenshuai.xi /* Get the internal overhead associated with chunk p */
1749*53ee8cc1Swenshuai.xi #define overhead_for(p)\
1750*53ee8cc1Swenshuai.xi  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1751*53ee8cc1Swenshuai.xi 
1752*53ee8cc1Swenshuai.xi /* Return true if malloced space is not necessarily cleared */
1753*53ee8cc1Swenshuai.xi #if MMAP_CLEARS
1754*53ee8cc1Swenshuai.xi #define calloc_must_clear(p) (!is_mmapped(p))
1755*53ee8cc1Swenshuai.xi #else /* MMAP_CLEARS */
1756*53ee8cc1Swenshuai.xi #define calloc_must_clear(p) (1)
1757*53ee8cc1Swenshuai.xi #endif /* MMAP_CLEARS */
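
/*
  Worked example of the head encoding (a sketch; sizes in hex, the
  low two bits of head being the flag bits): an allocated 0x90-byte
  chunk whose previous neighbor is also in use has
      head == 0x90 | CINUSE_BIT | PINUSE_BIT == 0x93
  so chunksize(p) == 0x90, cinuse(p) and pinuse(p) are nonzero, and
  next_chunk(p) is ((char*)p + 0x90).
*/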
1758*53ee8cc1Swenshuai.xi 
1759*53ee8cc1Swenshuai.xi #endif  // 0 moved out to dlmalloc.h
1760*53ee8cc1Swenshuai.xi 
1761*53ee8cc1Swenshuai.xi /* ---------------------- Overlaid data structures ----------------------- */
1762*53ee8cc1Swenshuai.xi 
1763*53ee8cc1Swenshuai.xi /*
1764*53ee8cc1Swenshuai.xi   When chunks are not in use, they are treated as nodes of either
1765*53ee8cc1Swenshuai.xi   lists or trees.
1766*53ee8cc1Swenshuai.xi 
1767*53ee8cc1Swenshuai.xi   "Small"  chunks are stored in circular doubly-linked lists, and look
1768*53ee8cc1Swenshuai.xi   like this:
1769*53ee8cc1Swenshuai.xi 
1770*53ee8cc1Swenshuai.xi     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1771*53ee8cc1Swenshuai.xi             |             Size of previous chunk                            |
1772*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1773*53ee8cc1Swenshuai.xi     `head:' |             Size of chunk, in bytes                         |P|
1774*53ee8cc1Swenshuai.xi       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1775*53ee8cc1Swenshuai.xi             |             Forward pointer to next chunk in list             |
1776*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1777*53ee8cc1Swenshuai.xi             |             Back pointer to previous chunk in list            |
1778*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1779*53ee8cc1Swenshuai.xi             |             Unused space (may be 0 bytes long)                .
1780*53ee8cc1Swenshuai.xi             .                                                               .
1781*53ee8cc1Swenshuai.xi             .                                                               |
1782*53ee8cc1Swenshuai.xi nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1783*53ee8cc1Swenshuai.xi     `foot:' |             Size of chunk, in bytes                           |
1784*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1785*53ee8cc1Swenshuai.xi 
1786*53ee8cc1Swenshuai.xi   Larger chunks are kept in a form of bitwise digital trees (aka
1787*53ee8cc1Swenshuai.xi   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1788*53ee8cc1Swenshuai.xi   free chunks greater than 256 bytes, their size doesn't impose any
1789*53ee8cc1Swenshuai.xi   constraints on user chunk sizes.  Each node looks like:
1790*53ee8cc1Swenshuai.xi 
1791*53ee8cc1Swenshuai.xi     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1792*53ee8cc1Swenshuai.xi             |             Size of previous chunk                            |
1793*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1794*53ee8cc1Swenshuai.xi     `head:' |             Size of chunk, in bytes                         |P|
1795*53ee8cc1Swenshuai.xi       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1796*53ee8cc1Swenshuai.xi             |             Forward pointer to next chunk of same size        |
1797*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1798*53ee8cc1Swenshuai.xi             |             Back pointer to previous chunk of same size       |
1799*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1800*53ee8cc1Swenshuai.xi             |             Pointer to left child (child[0])                  |
1801*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1802*53ee8cc1Swenshuai.xi             |             Pointer to right child (child[1])                 |
1803*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1804*53ee8cc1Swenshuai.xi             |             Pointer to parent                                 |
1805*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1806*53ee8cc1Swenshuai.xi             |             bin index of this chunk                           |
1807*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1808*53ee8cc1Swenshuai.xi             |             Unused space                                      .
1809*53ee8cc1Swenshuai.xi             .                                                               |
1810*53ee8cc1Swenshuai.xi nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1811*53ee8cc1Swenshuai.xi     `foot:' |             Size of chunk, in bytes                           |
1812*53ee8cc1Swenshuai.xi             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1813*53ee8cc1Swenshuai.xi 
1814*53ee8cc1Swenshuai.xi   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1815*53ee8cc1Swenshuai.xi   of the same size are arranged in a circularly-linked list, with only
1816*53ee8cc1Swenshuai.xi   the oldest chunk (the next to be used, in our FIFO ordering)
1817*53ee8cc1Swenshuai.xi   actually in the tree.  (Tree members are distinguished by a non-null
1818*53ee8cc1Swenshuai.xi   parent pointer.)  If a chunk with the same size as an existing node
1819*53ee8cc1Swenshuai.xi   is inserted, it is linked off the existing node using pointers that
1820*53ee8cc1Swenshuai.xi   work in the same way as fd/bk pointers of small chunks.
1821*53ee8cc1Swenshuai.xi 
1822*53ee8cc1Swenshuai.xi   Each tree contains a power of 2 sized range of chunk sizes (the
1823*53ee8cc1Swenshuai.xi   smallest is 0x100 <= x < 0x180), which is divided in half at each
1824*53ee8cc1Swenshuai.xi   tree level, with the chunks in the smaller half of the range (0x100
1825*53ee8cc1Swenshuai.xi   <= x < 0x140 for the top node) in the left subtree and the larger
1826*53ee8cc1Swenshuai.xi   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1827*53ee8cc1Swenshuai.xi   done by inspecting individual bits.
1828*53ee8cc1Swenshuai.xi 
1829*53ee8cc1Swenshuai.xi   Using these rules, each node's left subtree contains all smaller
1830*53ee8cc1Swenshuai.xi   sizes than its right subtree.  However, the node at the root of each
1831*53ee8cc1Swenshuai.xi   subtree has no particular ordering relationship to either.  (The
1832*53ee8cc1Swenshuai.xi   dividing line between the subtree sizes is based on trie relation.)
1833*53ee8cc1Swenshuai.xi   If we remove the last chunk of a given size from the interior of the
1834*53ee8cc1Swenshuai.xi   tree, we need to replace it with a leaf node.  The tree ordering
1835*53ee8cc1Swenshuai.xi   rules permit a node to be replaced by any leaf below it.
1836*53ee8cc1Swenshuai.xi 
1837*53ee8cc1Swenshuai.xi   The smallest chunk in a tree (a common operation in a best-fit
1838*53ee8cc1Swenshuai.xi   allocator) can be found by walking a path to the leftmost leaf in
1839*53ee8cc1Swenshuai.xi   the tree.  Unlike a usual binary tree, where we follow left child
1840*53ee8cc1Swenshuai.xi   pointers until we reach a null, here we follow the right child
1841*53ee8cc1Swenshuai.xi   pointer any time the left one is null, until we reach a leaf with
1842*53ee8cc1Swenshuai.xi   both child pointers null. The smallest chunk in the tree will be
1843*53ee8cc1Swenshuai.xi   somewhere along that path.
1844*53ee8cc1Swenshuai.xi 
1845*53ee8cc1Swenshuai.xi   The worst case number of steps to add, find, or remove a node is
1846*53ee8cc1Swenshuai.xi   bounded by the number of bits differentiating chunks within
1847*53ee8cc1Swenshuai.xi   bins. Under current bin calculations, this ranges from 6 up to 21
1848*53ee8cc1Swenshuai.xi   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1849*53ee8cc1Swenshuai.xi   is of course much better.
1850*53ee8cc1Swenshuai.xi */
1851*53ee8cc1Swenshuai.xi 
1852*53ee8cc1Swenshuai.xi struct malloc_tree_chunk {
1853*53ee8cc1Swenshuai.xi   /* The first four fields must be compatible with malloc_chunk */
1854*53ee8cc1Swenshuai.xi   size_t                    prev_foot;
1855*53ee8cc1Swenshuai.xi   size_t                    head;
1856*53ee8cc1Swenshuai.xi   struct malloc_tree_chunk* fd;
1857*53ee8cc1Swenshuai.xi   struct malloc_tree_chunk* bk;
1858*53ee8cc1Swenshuai.xi 
1859*53ee8cc1Swenshuai.xi   struct malloc_tree_chunk* child[2];
1860*53ee8cc1Swenshuai.xi   struct malloc_tree_chunk* parent;
1861*53ee8cc1Swenshuai.xi   bindex_t                  index;
1862*53ee8cc1Swenshuai.xi };
1863*53ee8cc1Swenshuai.xi 
1864*53ee8cc1Swenshuai.xi typedef struct malloc_tree_chunk  tchunk;
1865*53ee8cc1Swenshuai.xi typedef struct malloc_tree_chunk* tchunkptr;
1866*53ee8cc1Swenshuai.xi typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1867*53ee8cc1Swenshuai.xi 
1868*53ee8cc1Swenshuai.xi /* A little helper macro for trees */
1869*53ee8cc1Swenshuai.xi #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
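
/*
  Illustrative sketch (guarded out of the build; the helper name is
  hypothetical): the leftmost walk described above, tracking the
  smallest chunk seen along the path.  The real lookup logic lives
  in the tree-malloc routines further below.
*/
#if 0
static tchunkptr smallest_in_tree(tchunkptr t) {
  tchunkptr v = t;                  /* best (smallest) seen so far */
  tchunkptr u = leftmost_child(t);  /* left child, else right */
  while (u != 0) {
    if (chunksize(u) < chunksize(v))
      v = u;
    u = leftmost_child(u);
  }
  return v;
}
#endif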
1870*53ee8cc1Swenshuai.xi 
1871*53ee8cc1Swenshuai.xi /* ----------------------------- Segments -------------------------------- */
1872*53ee8cc1Swenshuai.xi 
1873*53ee8cc1Swenshuai.xi /*
1874*53ee8cc1Swenshuai.xi   Each malloc space may include non-contiguous segments, held in a
1875*53ee8cc1Swenshuai.xi   list headed by an embedded malloc_segment record representing the
1876*53ee8cc1Swenshuai.xi   top-most space. Segments also include flags holding properties of
1877*53ee8cc1Swenshuai.xi   the space. Large chunks that are directly allocated by mmap are not
1878*53ee8cc1Swenshuai.xi   included in this list. They are instead independently created and
1879*53ee8cc1Swenshuai.xi   destroyed without otherwise keeping track of them.
1880*53ee8cc1Swenshuai.xi 
1881*53ee8cc1Swenshuai.xi   Segment management mainly comes into play for spaces allocated by
1882*53ee8cc1Swenshuai.xi   MMAP.  Any call to MMAP might or might not return memory that is
1883*53ee8cc1Swenshuai.xi   adjacent to an existing segment.  MORECORE normally contiguously
1884*53ee8cc1Swenshuai.xi   extends the current space, so this space is almost always adjacent,
1885*53ee8cc1Swenshuai.xi   which is simpler and faster to deal with. (This is why MORECORE is
1886*53ee8cc1Swenshuai.xi   used preferentially to MMAP when both are available -- see
1887*53ee8cc1Swenshuai.xi   sys_alloc.)  When allocating using MMAP, we don't use any of the
1888*53ee8cc1Swenshuai.xi   hinting mechanisms (inconsistently) supported in various
1889*53ee8cc1Swenshuai.xi   implementations of unix mmap, or distinguish reserving from
1890*53ee8cc1Swenshuai.xi   committing memory. Instead, we just ask for space, and exploit
1891*53ee8cc1Swenshuai.xi   contiguity when we get it.  It is probably possible to do
1892*53ee8cc1Swenshuai.xi   better than this on some systems, but no general scheme seems
1893*53ee8cc1Swenshuai.xi   to be significantly better.
1894*53ee8cc1Swenshuai.xi 
1895*53ee8cc1Swenshuai.xi   Management entails a simpler variant of the consolidation scheme
1896*53ee8cc1Swenshuai.xi   used for chunks to reduce fragmentation -- new adjacent memory is
1897*53ee8cc1Swenshuai.xi   normally prepended or appended to an existing segment. However,
1898*53ee8cc1Swenshuai.xi   there are limitations compared to chunk consolidation that mostly
1899*53ee8cc1Swenshuai.xi   reflect the fact that segment processing is relatively infrequent
1900*53ee8cc1Swenshuai.xi   (occurring only when getting memory from system) and that we
1901*53ee8cc1Swenshuai.xi   don't expect to have huge numbers of segments:
1902*53ee8cc1Swenshuai.xi 
1903*53ee8cc1Swenshuai.xi   * Segments are not indexed, so traversal requires linear scans.  (It
1904*53ee8cc1Swenshuai.xi     would be possible to index these, but is not worth the extra
1905*53ee8cc1Swenshuai.xi     overhead and complexity for most programs on most platforms.)
1906*53ee8cc1Swenshuai.xi   * New segments are only appended to old ones when holding top-most
1907*53ee8cc1Swenshuai.xi     memory; if they cannot be prepended to others, they are held in
1908*53ee8cc1Swenshuai.xi     different segments.
1909*53ee8cc1Swenshuai.xi 
1910*53ee8cc1Swenshuai.xi   Except for the top-most segment of an mstate, each segment record
1911*53ee8cc1Swenshuai.xi   is kept at the tail of its segment. Segments are added by pushing
1912*53ee8cc1Swenshuai.xi   segment records onto the list headed by &mstate.seg for the
1913*53ee8cc1Swenshuai.xi   containing mstate.
1914*53ee8cc1Swenshuai.xi 
1915*53ee8cc1Swenshuai.xi   Segment flags control allocation/merge/deallocation policies:
1916*53ee8cc1Swenshuai.xi   * If EXTERN_BIT set, then we did not allocate this segment,
1917*53ee8cc1Swenshuai.xi     and so should not try to deallocate or merge with others.
1918*53ee8cc1Swenshuai.xi     (This currently holds only for the initial segment passed
1919*53ee8cc1Swenshuai.xi     into create_mspace_with_base.)
1920*53ee8cc1Swenshuai.xi   * If IS_MMAPPED_BIT set, the segment may be merged with
1921*53ee8cc1Swenshuai.xi     other surrounding mmapped segments and trimmed/de-allocated
1922*53ee8cc1Swenshuai.xi     using munmap.
1923*53ee8cc1Swenshuai.xi   * If neither bit is set, then the segment was obtained using
1924*53ee8cc1Swenshuai.xi     MORECORE so can be merged with surrounding MORECORE'd segments
1925*53ee8cc1Swenshuai.xi     and deallocated/trimmed using MORECORE with negative arguments.
1926*53ee8cc1Swenshuai.xi */
1927*53ee8cc1Swenshuai.xi 
1928*53ee8cc1Swenshuai.xi struct malloc_segment {
1929*53ee8cc1Swenshuai.xi   char*        base;             /* base address */
1930*53ee8cc1Swenshuai.xi   size_t       size;             /* allocated size */
1931*53ee8cc1Swenshuai.xi   struct malloc_segment* next;   /* ptr to next segment */
1932*53ee8cc1Swenshuai.xi   flag_t       sflags;           /* mmap and extern flag */
1933*53ee8cc1Swenshuai.xi };
1934*53ee8cc1Swenshuai.xi 
1935*53ee8cc1Swenshuai.xi #define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1936*53ee8cc1Swenshuai.xi #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1937*53ee8cc1Swenshuai.xi 
1938*53ee8cc1Swenshuai.xi typedef struct malloc_segment  msegment;
1939*53ee8cc1Swenshuai.xi typedef struct malloc_segment* msegmentptr;
1940*53ee8cc1Swenshuai.xi 
1941*53ee8cc1Swenshuai.xi /* ---------------------------- malloc_state ----------------------------- */
1942*53ee8cc1Swenshuai.xi 
1943*53ee8cc1Swenshuai.xi /*
1944*53ee8cc1Swenshuai.xi    A malloc_state holds all of the bookkeeping for a space.
1945*53ee8cc1Swenshuai.xi    The main fields are:
1946*53ee8cc1Swenshuai.xi 
1947*53ee8cc1Swenshuai.xi   Top
1948*53ee8cc1Swenshuai.xi     The topmost chunk of the currently active segment. Its size is
1949*53ee8cc1Swenshuai.xi     cached in topsize.  The actual size of topmost space is
1950*53ee8cc1Swenshuai.xi     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1951*53ee8cc1Swenshuai.xi     fenceposts and segment records if necessary when getting more
1952*53ee8cc1Swenshuai.xi     space from the system.  The size at which to autotrim top is
1953*53ee8cc1Swenshuai.xi     cached from mparams in trim_check, except that it is disabled if
1954*53ee8cc1Swenshuai.xi     an autotrim fails.
1955*53ee8cc1Swenshuai.xi 
1956*53ee8cc1Swenshuai.xi   Designated victim (dv)
1957*53ee8cc1Swenshuai.xi     This is the preferred chunk for servicing small requests that
1958*53ee8cc1Swenshuai.xi     don't have exact fits.  It is normally the chunk split off most
1959*53ee8cc1Swenshuai.xi     recently to service another small request.  Its size is cached in
1960*53ee8cc1Swenshuai.xi     dvsize. The link fields of this chunk are not maintained since it
1961*53ee8cc1Swenshuai.xi     is not kept in a bin.
1962*53ee8cc1Swenshuai.xi 
1963*53ee8cc1Swenshuai.xi   SmallBins
1964*53ee8cc1Swenshuai.xi     An array of bin headers for free chunks.  These bins hold chunks
1965*53ee8cc1Swenshuai.xi     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1966*53ee8cc1Swenshuai.xi     chunks of all the same size, spaced 8 bytes apart.  To simplify
1967*53ee8cc1Swenshuai.xi     use in double-linked lists, each bin header acts as a malloc_chunk
1968*53ee8cc1Swenshuai.xi     pointing to the real first node, if it exists (else pointing to
1969*53ee8cc1Swenshuai.xi     itself).  This avoids special-casing for headers.  But to avoid
1970*53ee8cc1Swenshuai.xi     waste, we allocate only the fd/bk pointers of bins, and then use
1971*53ee8cc1Swenshuai.xi     repositioning tricks to treat these as the fields of a chunk.
1972*53ee8cc1Swenshuai.xi 
1973*53ee8cc1Swenshuai.xi   TreeBins
1974*53ee8cc1Swenshuai.xi     Treebins are pointers to the roots of trees holding a range of
1975*53ee8cc1Swenshuai.xi     sizes. There are 2 equally spaced treebins for each power of two
1976*53ee8cc1Swenshuai.xi     from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1977*53ee8cc1Swenshuai.xi     larger.
1978*53ee8cc1Swenshuai.xi 
1979*53ee8cc1Swenshuai.xi   Bin maps
1980*53ee8cc1Swenshuai.xi     There is one bit map for small bins ("smallmap") and one for
1981*53ee8cc1Swenshuai.xi     treebins ("treemap").  Each bin sets its bit when non-empty, and
1982*53ee8cc1Swenshuai.xi     clears the bit when empty.  Bit operations are then used to avoid
1983*53ee8cc1Swenshuai.xi     bin-by-bin searching -- nearly all "search" is done without ever
1984*53ee8cc1Swenshuai.xi     looking at bins that won't be selected.  The bit maps
1985*53ee8cc1Swenshuai.xi     conservatively use 32 bits per map word, even on a 64-bit system.
1986*53ee8cc1Swenshuai.xi     For a good description of some of the bit-based techniques used
1987*53ee8cc1Swenshuai.xi     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1988*53ee8cc1Swenshuai.xi     supplement at http://hackersdelight.org/). Many of these are
1989*53ee8cc1Swenshuai.xi     intended to reduce the branchiness of paths through malloc etc, as
1990*53ee8cc1Swenshuai.xi     well as to reduce the number of memory locations read or written.
1991*53ee8cc1Swenshuai.xi 
1992*53ee8cc1Swenshuai.xi   Segments
1993*53ee8cc1Swenshuai.xi     A list of segments headed by an embedded malloc_segment record
1994*53ee8cc1Swenshuai.xi     representing the initial space.
1995*53ee8cc1Swenshuai.xi 
1996*53ee8cc1Swenshuai.xi   Address check support
1997*53ee8cc1Swenshuai.xi     The least_addr field is the least address ever obtained from
1998*53ee8cc1Swenshuai.xi     MORECORE or MMAP. Attempted frees and reallocs of any address less
1999*53ee8cc1Swenshuai.xi     than this are trapped (unless INSECURE is defined).
2000*53ee8cc1Swenshuai.xi 
2001*53ee8cc1Swenshuai.xi   Magic tag
2002*53ee8cc1Swenshuai.xi     A cross-check field that should always hold the same value as mparams.magic.
2003*53ee8cc1Swenshuai.xi 
2004*53ee8cc1Swenshuai.xi   Flags
2005*53ee8cc1Swenshuai.xi     Bits recording whether to use MMAP, locks, or contiguous MORECORE
2006*53ee8cc1Swenshuai.xi 
2007*53ee8cc1Swenshuai.xi   Statistics
2008*53ee8cc1Swenshuai.xi     Each space keeps track of current and maximum system memory
2009*53ee8cc1Swenshuai.xi     obtained via MORECORE or MMAP.
2010*53ee8cc1Swenshuai.xi 
2011*53ee8cc1Swenshuai.xi   Locking
2012*53ee8cc1Swenshuai.xi     If USE_LOCKS is defined, the "mutex" lock is acquired and released
2013*53ee8cc1Swenshuai.xi     around every public call using this mspace.
2014*53ee8cc1Swenshuai.xi */
2015*53ee8cc1Swenshuai.xi 
2016*53ee8cc1Swenshuai.xi /* Bin types, widths and sizes */
2017*53ee8cc1Swenshuai.xi #define NSMALLBINS        (32U)
2018*53ee8cc1Swenshuai.xi #define NTREEBINS         (32U)
2019*53ee8cc1Swenshuai.xi #define SMALLBIN_SHIFT    (3U)
2020*53ee8cc1Swenshuai.xi #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2021*53ee8cc1Swenshuai.xi #define TREEBIN_SHIFT     (8U)
2022*53ee8cc1Swenshuai.xi #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2023*53ee8cc1Swenshuai.xi #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2024*53ee8cc1Swenshuai.xi #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2025*53ee8cc1Swenshuai.xi 
2026*53ee8cc1Swenshuai.xi struct malloc_state {
2027*53ee8cc1Swenshuai.xi   binmap_t   smallmap;
2028*53ee8cc1Swenshuai.xi   binmap_t   treemap;
2029*53ee8cc1Swenshuai.xi   size_t     dvsize;
2030*53ee8cc1Swenshuai.xi   size_t     topsize;
2031*53ee8cc1Swenshuai.xi   char*      least_addr;
2032*53ee8cc1Swenshuai.xi   mchunkptr  dv;
2033*53ee8cc1Swenshuai.xi   mchunkptr  top;
2034*53ee8cc1Swenshuai.xi   size_t     trim_check;
2035*53ee8cc1Swenshuai.xi   size_t     magic;
2036*53ee8cc1Swenshuai.xi   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2037*53ee8cc1Swenshuai.xi   tbinptr    treebins[NTREEBINS];
2038*53ee8cc1Swenshuai.xi   size_t     footprint;
2039*53ee8cc1Swenshuai.xi   size_t     max_footprint;
2040*53ee8cc1Swenshuai.xi   flag_t     mflags;
2041*53ee8cc1Swenshuai.xi #if USE_LOCKS
2042*53ee8cc1Swenshuai.xi   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2043*53ee8cc1Swenshuai.xi #endif /* USE_LOCKS */
2044*53ee8cc1Swenshuai.xi   msegment   seg;
2045*53ee8cc1Swenshuai.xi };
2046*53ee8cc1Swenshuai.xi 
2047*53ee8cc1Swenshuai.xi typedef struct malloc_state*    mstate;
2048*53ee8cc1Swenshuai.xi 
2049*53ee8cc1Swenshuai.xi /* ------------- Global malloc_state and malloc_params ------------------- */
2050*53ee8cc1Swenshuai.xi 
2051*53ee8cc1Swenshuai.xi /*
2052*53ee8cc1Swenshuai.xi   malloc_params holds global properties, including those that can be
2053*53ee8cc1Swenshuai.xi   dynamically set using mallopt. There is a single instance, mparams,
2054*53ee8cc1Swenshuai.xi   initialized in init_mparams.
2055*53ee8cc1Swenshuai.xi */
2056*53ee8cc1Swenshuai.xi 
2057*53ee8cc1Swenshuai.xi struct malloc_params {
2058*53ee8cc1Swenshuai.xi   size_t magic;
2059*53ee8cc1Swenshuai.xi   size_t page_size;
2060*53ee8cc1Swenshuai.xi   size_t granularity;
2061*53ee8cc1Swenshuai.xi   size_t mmap_threshold;
2062*53ee8cc1Swenshuai.xi   size_t trim_threshold;
2063*53ee8cc1Swenshuai.xi   flag_t default_mflags;
2064*53ee8cc1Swenshuai.xi };
2065*53ee8cc1Swenshuai.xi 
2066*53ee8cc1Swenshuai.xi static struct malloc_params mparams;
2067*53ee8cc1Swenshuai.xi 
2068*53ee8cc1Swenshuai.xi /* The global malloc_state used for all non-"mspace" calls */
2069*53ee8cc1Swenshuai.xi static struct malloc_state _gm_;
2070*53ee8cc1Swenshuai.xi #define gm                 (&_gm_)
2071*53ee8cc1Swenshuai.xi #define is_global(M)       ((M) == &_gm_)
2072*53ee8cc1Swenshuai.xi #define is_initialized(M)  ((M)->top != 0)
2073*53ee8cc1Swenshuai.xi 
2074*53ee8cc1Swenshuai.xi /* -------------------------- system alloc setup ------------------------- */
2075*53ee8cc1Swenshuai.xi 
2076*53ee8cc1Swenshuai.xi /* Operations on mflags */
2077*53ee8cc1Swenshuai.xi 
2078*53ee8cc1Swenshuai.xi #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2079*53ee8cc1Swenshuai.xi #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2080*53ee8cc1Swenshuai.xi #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2081*53ee8cc1Swenshuai.xi 
2082*53ee8cc1Swenshuai.xi #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2083*53ee8cc1Swenshuai.xi #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2084*53ee8cc1Swenshuai.xi #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2085*53ee8cc1Swenshuai.xi 
2086*53ee8cc1Swenshuai.xi #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2087*53ee8cc1Swenshuai.xi #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2088*53ee8cc1Swenshuai.xi 
2089*53ee8cc1Swenshuai.xi #define set_lock(M,L)\
2090*53ee8cc1Swenshuai.xi  ((M)->mflags = (L)?\
2091*53ee8cc1Swenshuai.xi   ((M)->mflags | USE_LOCK_BIT) :\
2092*53ee8cc1Swenshuai.xi   ((M)->mflags & ~USE_LOCK_BIT))
2093*53ee8cc1Swenshuai.xi 
2094*53ee8cc1Swenshuai.xi /* page-align a size */
2095*53ee8cc1Swenshuai.xi #define page_align(S)\
2096*53ee8cc1Swenshuai.xi  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2097*53ee8cc1Swenshuai.xi 
2098*53ee8cc1Swenshuai.xi /* granularity-align a size */
2099*53ee8cc1Swenshuai.xi #define granularity_align(S)\
2100*53ee8cc1Swenshuai.xi   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2101*53ee8cc1Swenshuai.xi 
2102*53ee8cc1Swenshuai.xi #define is_page_aligned(S)\
2103*53ee8cc1Swenshuai.xi    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2104*53ee8cc1Swenshuai.xi #define is_granularity_aligned(S)\
2105*53ee8cc1Swenshuai.xi    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
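
/*
  Worked example (a sketch, assuming a 4096-byte page):
    page_align(1)    == (1 + 4096)    & ~4095 == 4096
    page_align(4096) == (4096 + 4096) & ~4095 == 8192
  An already-aligned size rounds up by a full page; the macro is
  only used when sizing fresh system requests, where the result is
  deliberately generous.
*/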
2106*53ee8cc1Swenshuai.xi 
2107*53ee8cc1Swenshuai.xi /*  True if segment S holds address A */
2108*53ee8cc1Swenshuai.xi #define segment_holds(S, A)\
2109*53ee8cc1Swenshuai.xi   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2110*53ee8cc1Swenshuai.xi 
2111*53ee8cc1Swenshuai.xi /* Return segment holding given address */
2112*53ee8cc1Swenshuai.xi static msegmentptr segment_holding(mstate m, char* addr) {
2113*53ee8cc1Swenshuai.xi   msegmentptr sp = &m->seg;
2114*53ee8cc1Swenshuai.xi   for (;;) {
2115*53ee8cc1Swenshuai.xi     if (addr >= sp->base && addr < sp->base + sp->size)
2116*53ee8cc1Swenshuai.xi       return sp;
2117*53ee8cc1Swenshuai.xi     if ((sp = sp->next) == 0)
2118*53ee8cc1Swenshuai.xi       return 0;
2119*53ee8cc1Swenshuai.xi   }
2120*53ee8cc1Swenshuai.xi }
2121*53ee8cc1Swenshuai.xi 
2122*53ee8cc1Swenshuai.xi /* Return true if segment contains a segment link */
2123*53ee8cc1Swenshuai.xi static int has_segment_link(mstate m, msegmentptr ss) {
2124*53ee8cc1Swenshuai.xi   msegmentptr sp = &m->seg;
2125*53ee8cc1Swenshuai.xi   for (;;) {
2126*53ee8cc1Swenshuai.xi     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2127*53ee8cc1Swenshuai.xi       return 1;
2128*53ee8cc1Swenshuai.xi     if ((sp = sp->next) == 0)
2129*53ee8cc1Swenshuai.xi       return 0;
2130*53ee8cc1Swenshuai.xi   }
2131*53ee8cc1Swenshuai.xi }
2132*53ee8cc1Swenshuai.xi 
2133*53ee8cc1Swenshuai.xi #ifndef MORECORE_CANNOT_TRIM
2134*53ee8cc1Swenshuai.xi #define should_trim(M,s)  ((s) > (M)->trim_check)
2135*53ee8cc1Swenshuai.xi #else  /* MORECORE_CANNOT_TRIM */
2136*53ee8cc1Swenshuai.xi #define should_trim(M,s)  (0)
2137*53ee8cc1Swenshuai.xi #endif /* MORECORE_CANNOT_TRIM */
2138*53ee8cc1Swenshuai.xi 
2139*53ee8cc1Swenshuai.xi /*
2140*53ee8cc1Swenshuai.xi   TOP_FOOT_SIZE is padding at the end of a segment, including space
2141*53ee8cc1Swenshuai.xi   that may be needed to place segment records and fenceposts when new
2142*53ee8cc1Swenshuai.xi   noncontiguous segments are added.
2143*53ee8cc1Swenshuai.xi */
2144*53ee8cc1Swenshuai.xi #define TOP_FOOT_SIZE\
2145*53ee8cc1Swenshuai.xi   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2146*53ee8cc1Swenshuai.xi 
2147*53ee8cc1Swenshuai.xi 
2148*53ee8cc1Swenshuai.xi /* -------------------------------  Hooks -------------------------------- */
2149*53ee8cc1Swenshuai.xi 
2150*53ee8cc1Swenshuai.xi /*
2151*53ee8cc1Swenshuai.xi   PREACTION should be defined to return 0 on success, and nonzero on
2152*53ee8cc1Swenshuai.xi   failure. If you are not using locking, you can redefine these to do
2153*53ee8cc1Swenshuai.xi   anything you like.
2154*53ee8cc1Swenshuai.xi */
2155*53ee8cc1Swenshuai.xi 
2156*53ee8cc1Swenshuai.xi #if USE_LOCKS
2157*53ee8cc1Swenshuai.xi 
2158*53ee8cc1Swenshuai.xi /* Ensure locks are initialized */
2159*53ee8cc1Swenshuai.xi #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2160*53ee8cc1Swenshuai.xi 
2161*53ee8cc1Swenshuai.xi #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2162*53ee8cc1Swenshuai.xi #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2163*53ee8cc1Swenshuai.xi #else /* USE_LOCKS */
2164*53ee8cc1Swenshuai.xi 
2165*53ee8cc1Swenshuai.xi #ifndef PREACTION
2166*53ee8cc1Swenshuai.xi #define PREACTION(M) (0)
2167*53ee8cc1Swenshuai.xi #endif  /* PREACTION */
2168*53ee8cc1Swenshuai.xi 
2169*53ee8cc1Swenshuai.xi #ifndef POSTACTION
2170*53ee8cc1Swenshuai.xi #define POSTACTION(M)
2171*53ee8cc1Swenshuai.xi #endif  /* POSTACTION */
2172*53ee8cc1Swenshuai.xi 
2173*53ee8cc1Swenshuai.xi #endif /* USE_LOCKS */
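
/*
  Sketch of how the hooks bracket a public operation (illustrative
  only; the function name is hypothetical).  PREACTION returning 0
  means the call may proceed, with the mspace lock held when locking
  is in use; POSTACTION releases whatever PREACTION acquired.
*/
#if 0
void* example_public_op(mstate m, size_t bytes) {
  void* mem = 0;
  if (!PREACTION(m)) {
    /* ... operate on m while it is protected ... */
    POSTACTION(m);
  }
  return mem;
}
#endif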
2174*53ee8cc1Swenshuai.xi 
2175*53ee8cc1Swenshuai.xi /*
2176*53ee8cc1Swenshuai.xi   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2177*53ee8cc1Swenshuai.xi   USAGE_ERROR_ACTION is triggered on detected bad frees and
2178*53ee8cc1Swenshuai.xi   reallocs. The argument p is an address that might have triggered the
2179*53ee8cc1Swenshuai.xi   fault. It is ignored by the two predefined actions, but might be
2180*53ee8cc1Swenshuai.xi   useful in custom actions that try to help diagnose errors.
2181*53ee8cc1Swenshuai.xi */
2182*53ee8cc1Swenshuai.xi 
2183*53ee8cc1Swenshuai.xi #if PROCEED_ON_ERROR
2184*53ee8cc1Swenshuai.xi 
2185*53ee8cc1Swenshuai.xi /* A count of the number of corruption errors causing resets */
2186*53ee8cc1Swenshuai.xi int malloc_corruption_error_count;
2187*53ee8cc1Swenshuai.xi 
2188*53ee8cc1Swenshuai.xi /* default corruption action */
2189*53ee8cc1Swenshuai.xi static void reset_on_error(mstate m);
2190*53ee8cc1Swenshuai.xi 
2191*53ee8cc1Swenshuai.xi #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2192*53ee8cc1Swenshuai.xi #define USAGE_ERROR_ACTION(m, p)
2193*53ee8cc1Swenshuai.xi 
2194*53ee8cc1Swenshuai.xi #else /* PROCEED_ON_ERROR */
2195*53ee8cc1Swenshuai.xi 
2196*53ee8cc1Swenshuai.xi #ifndef CORRUPTION_ERROR_ACTION
2197*53ee8cc1Swenshuai.xi #define CORRUPTION_ERROR_ACTION(m) ABORT
2198*53ee8cc1Swenshuai.xi #endif /* CORRUPTION_ERROR_ACTION */
2199*53ee8cc1Swenshuai.xi 
2200*53ee8cc1Swenshuai.xi #ifndef USAGE_ERROR_ACTION
2201*53ee8cc1Swenshuai.xi #define USAGE_ERROR_ACTION(m,p) ABORT
2202*53ee8cc1Swenshuai.xi #endif /* USAGE_ERROR_ACTION */
2203*53ee8cc1Swenshuai.xi 
2204*53ee8cc1Swenshuai.xi #endif /* PROCEED_ON_ERROR */
2205*53ee8cc1Swenshuai.xi 
2206*53ee8cc1Swenshuai.xi /* -------------------------- Debugging setup ---------------------------- */
2207*53ee8cc1Swenshuai.xi 
2208*53ee8cc1Swenshuai.xi #if ! DEBUG
2209*53ee8cc1Swenshuai.xi 
2210*53ee8cc1Swenshuai.xi #define check_free_chunk(M,P)
2211*53ee8cc1Swenshuai.xi #define check_inuse_chunk(M,P)
2212*53ee8cc1Swenshuai.xi #define check_malloced_chunk(M,P,N)
2213*53ee8cc1Swenshuai.xi #define check_mmapped_chunk(M,P)
2214*53ee8cc1Swenshuai.xi #define check_malloc_state(M)
2215*53ee8cc1Swenshuai.xi #define check_top_chunk(M,P)
2216*53ee8cc1Swenshuai.xi 
2217*53ee8cc1Swenshuai.xi #else /* DEBUG */
2218*53ee8cc1Swenshuai.xi #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2219*53ee8cc1Swenshuai.xi #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2220*53ee8cc1Swenshuai.xi #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2221*53ee8cc1Swenshuai.xi #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2222*53ee8cc1Swenshuai.xi #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2223*53ee8cc1Swenshuai.xi #define check_malloc_state(M)       do_check_malloc_state(M)
2224*53ee8cc1Swenshuai.xi 
2225*53ee8cc1Swenshuai.xi static void   do_check_any_chunk(mstate m, mchunkptr p);
2226*53ee8cc1Swenshuai.xi static void   do_check_top_chunk(mstate m, mchunkptr p);
2227*53ee8cc1Swenshuai.xi static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2228*53ee8cc1Swenshuai.xi static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2229*53ee8cc1Swenshuai.xi static void   do_check_free_chunk(mstate m, mchunkptr p);
2230*53ee8cc1Swenshuai.xi static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2231*53ee8cc1Swenshuai.xi static void   do_check_tree(mstate m, tchunkptr t);
2232*53ee8cc1Swenshuai.xi static void   do_check_treebin(mstate m, bindex_t i);
2233*53ee8cc1Swenshuai.xi static void   do_check_smallbin(mstate m, bindex_t i);
2234*53ee8cc1Swenshuai.xi static void   do_check_malloc_state(mstate m);
2235*53ee8cc1Swenshuai.xi static int    bin_find(mstate m, mchunkptr x);
2236*53ee8cc1Swenshuai.xi static size_t traverse_and_check(mstate m);
2237*53ee8cc1Swenshuai.xi #endif /* DEBUG */
2238*53ee8cc1Swenshuai.xi 
2239*53ee8cc1Swenshuai.xi /* ---------------------------- Indexing Bins ---------------------------- */
2240*53ee8cc1Swenshuai.xi 
2241*53ee8cc1Swenshuai.xi #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2242*53ee8cc1Swenshuai.xi #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2243*53ee8cc1Swenshuai.xi #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2244*53ee8cc1Swenshuai.xi #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2245*53ee8cc1Swenshuai.xi 
2246*53ee8cc1Swenshuai.xi /* addressing by index. See above about smallbin repositioning */
2247*53ee8cc1Swenshuai.xi #define smallbin_at(M, i)   ((sbinptr)((unsigned long)&((M)->smallbins[(i)<<1])))
2248*53ee8cc1Swenshuai.xi #define treebin_at(M,i)     (&((M)->treebins[i]))
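
/*
  Worked example (a sketch; SMALLBIN_SHIFT == 3, so small bins are
  spaced 8 bytes apart): small_index(16) == 2, small_index(24) == 3,
  and small_index2size(31) == 248, the largest small chunk size.
  smallbin_at(M,2) treats the address of smallbins[4] as a
  malloc_chunk, so the bin's list head can be linked through fd/bk
  like any other chunk without storing full headers per bin.
*/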
2249*53ee8cc1Swenshuai.xi 
2250*53ee8cc1Swenshuai.xi /* assign tree index for size S to variable I */
2251*53ee8cc1Swenshuai.xi #if defined(__GNUC__) && defined(i386)
2252*53ee8cc1Swenshuai.xi #define compute_tree_index(S, I)\
2253*53ee8cc1Swenshuai.xi {\
2254*53ee8cc1Swenshuai.xi   size_t X = S >> TREEBIN_SHIFT;\
2255*53ee8cc1Swenshuai.xi   if (X == 0)\
2256*53ee8cc1Swenshuai.xi     I = 0;\
2257*53ee8cc1Swenshuai.xi   else if (X > 0xFFFF)\
2258*53ee8cc1Swenshuai.xi     I = NTREEBINS-1;\
2259*53ee8cc1Swenshuai.xi   else {\
2260*53ee8cc1Swenshuai.xi     unsigned int K;\
2261*53ee8cc1Swenshuai.xi     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
2262*53ee8cc1Swenshuai.xi     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2263*53ee8cc1Swenshuai.xi   }\
2264*53ee8cc1Swenshuai.xi }
2265*53ee8cc1Swenshuai.xi #else /* GNUC */
2266*53ee8cc1Swenshuai.xi #define compute_tree_index(S, I)\
2267*53ee8cc1Swenshuai.xi {\
2268*53ee8cc1Swenshuai.xi   size_t X = S >> TREEBIN_SHIFT;\
2269*53ee8cc1Swenshuai.xi   if (X == 0)\
2270*53ee8cc1Swenshuai.xi     I = 0;\
2271*53ee8cc1Swenshuai.xi   else if (X > 0xFFFF)\
2272*53ee8cc1Swenshuai.xi     I = NTREEBINS-1;\
2273*53ee8cc1Swenshuai.xi   else {\
2274*53ee8cc1Swenshuai.xi     unsigned int Y = (unsigned int)X;\
2275*53ee8cc1Swenshuai.xi     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2276*53ee8cc1Swenshuai.xi     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2277*53ee8cc1Swenshuai.xi     N += K;\
2278*53ee8cc1Swenshuai.xi     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2279*53ee8cc1Swenshuai.xi     K = 14 - N + ((Y <<= K) >> 15);\
2280*53ee8cc1Swenshuai.xi     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2281*53ee8cc1Swenshuai.xi   }\
2282*53ee8cc1Swenshuai.xi }
2283*53ee8cc1Swenshuai.xi #endif /* GNUC */
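
/*
  Worked example (a sketch, 32-bit sizes, TREEBIN_SHIFT == 8): the
  generic version above maps
      0x100..0x17F -> I == 0        0x180..0x1FF -> I == 1
      0x200..0x2FF -> I == 2        0x300..0x3FF -> I == 3
  i.e. two bins per power of two, split on the bit just below the
  leading bit, matching the treebin layout described earlier.
*/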
2284*53ee8cc1Swenshuai.xi 
2285*53ee8cc1Swenshuai.xi /* Bit representing maximum resolved size in a treebin at i */
2286*53ee8cc1Swenshuai.xi #define bit_for_tree_index(i) \
2287*53ee8cc1Swenshuai.xi    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2288*53ee8cc1Swenshuai.xi 
2289*53ee8cc1Swenshuai.xi /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2290*53ee8cc1Swenshuai.xi #define leftshift_for_tree_index(i) \
2291*53ee8cc1Swenshuai.xi    ((i == NTREEBINS-1)? 0 : \
2292*53ee8cc1Swenshuai.xi     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2293*53ee8cc1Swenshuai.xi 
2294*53ee8cc1Swenshuai.xi /* The size of the smallest chunk held in bin with index i */
2295*53ee8cc1Swenshuai.xi #define minsize_for_tree_index(i) \
2296*53ee8cc1Swenshuai.xi    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2297*53ee8cc1Swenshuai.xi    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2298*53ee8cc1Swenshuai.xi 
2299*53ee8cc1Swenshuai.xi 
2300*53ee8cc1Swenshuai.xi /* ------------------------ Operations on bin maps ----------------------- */
2301*53ee8cc1Swenshuai.xi 
2302*53ee8cc1Swenshuai.xi /* bit corresponding to given index */
2303*53ee8cc1Swenshuai.xi #define idx2bit(i)              ((binmap_t)(1) << (i))
2304*53ee8cc1Swenshuai.xi 
2305*53ee8cc1Swenshuai.xi /* Mark/Clear bits with given index */
2306*53ee8cc1Swenshuai.xi #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2307*53ee8cc1Swenshuai.xi #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2308*53ee8cc1Swenshuai.xi #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2309*53ee8cc1Swenshuai.xi 
2310*53ee8cc1Swenshuai.xi #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2311*53ee8cc1Swenshuai.xi #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2312*53ee8cc1Swenshuai.xi #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2313*53ee8cc1Swenshuai.xi 
2314*53ee8cc1Swenshuai.xi /* index corresponding to given bit */
2315*53ee8cc1Swenshuai.xi 
2316*53ee8cc1Swenshuai.xi #if defined(__GNUC__) && defined(i386)
2317*53ee8cc1Swenshuai.xi #define compute_bit2idx(X, I)\
2318*53ee8cc1Swenshuai.xi {\
2319*53ee8cc1Swenshuai.xi   unsigned int J;\
2320*53ee8cc1Swenshuai.xi   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2321*53ee8cc1Swenshuai.xi   I = (bindex_t)J;\
2322*53ee8cc1Swenshuai.xi }
2323*53ee8cc1Swenshuai.xi 
2324*53ee8cc1Swenshuai.xi #else /* GNUC */
2325*53ee8cc1Swenshuai.xi #if  USE_BUILTIN_FFS
2326*53ee8cc1Swenshuai.xi #define compute_bit2idx(X, I) I = ffs(X)-1
2327*53ee8cc1Swenshuai.xi 
2328*53ee8cc1Swenshuai.xi #else /* USE_BUILTIN_FFS */
2329*53ee8cc1Swenshuai.xi #define compute_bit2idx(X, I)\
2330*53ee8cc1Swenshuai.xi {\
2331*53ee8cc1Swenshuai.xi   unsigned int Y = X - 1;\
2332*53ee8cc1Swenshuai.xi   unsigned int K = Y >> (16-4) & 16;\
2333*53ee8cc1Swenshuai.xi   unsigned int N = K;        Y >>= K;\
2334*53ee8cc1Swenshuai.xi   N += K = Y >> (8-3) &  8;  Y >>= K;\
2335*53ee8cc1Swenshuai.xi   N += K = Y >> (4-2) &  4;  Y >>= K;\
2336*53ee8cc1Swenshuai.xi   N += K = Y >> (2-1) &  2;  Y >>= K;\
2337*53ee8cc1Swenshuai.xi   N += K = Y >> (1-0) &  1;  Y >>= K;\
2338*53ee8cc1Swenshuai.xi   I = (bindex_t)(N + Y);\
2339*53ee8cc1Swenshuai.xi }
2340*53ee8cc1Swenshuai.xi #endif /* USE_BUILTIN_FFS */
2341*53ee8cc1Swenshuai.xi #endif /* GNUC */
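
/*
  Worked example for the portable version (a sketch): X is expected
  to be a power of two, e.g. the result of least_bit.  For X == 0x10,
  Y starts at 0xF; the five steps contribute K == 0, 0, 0, 2, 1
  (so N == 3) and leave Y == 1, giving I == N + Y == 4, the index
  of the set bit.
*/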
2342*53ee8cc1Swenshuai.xi 
2343*53ee8cc1Swenshuai.xi /* isolate the least set bit of a bitmap */
2344*53ee8cc1Swenshuai.xi #define least_bit(x)         ((x) & -(x))
2345*53ee8cc1Swenshuai.xi 
2346*53ee8cc1Swenshuai.xi /* mask with all bits to left of least bit of x on */
2347*53ee8cc1Swenshuai.xi #define left_bits(x)         ((x<<1) | -(x<<1))
2348*53ee8cc1Swenshuai.xi 
2349*53ee8cc1Swenshuai.xi /* mask with all bits to left of or equal to least bit of x on */
2350*53ee8cc1Swenshuai.xi #define same_or_left_bits(x) ((x) | -(x))
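
/*
  Worked example (a sketch, 8-bit binary for brevity), x == 00110100:
    least_bit(x)         == 00000100   (isolates bit 2)
    left_bits(x)         == 11111000   (bits strictly above bit 2)
    same_or_left_bits(x) == 11111100   (bit 2 and everything above)
  Masks like these, and'ed with smallmap/treemap, restrict a bin
  search to indices at least as large as a starting point.
*/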
2351*53ee8cc1Swenshuai.xi 
2352*53ee8cc1Swenshuai.xi 
2353*53ee8cc1Swenshuai.xi /* ----------------------- Runtime Check Support ------------------------- */
2354*53ee8cc1Swenshuai.xi 
2355*53ee8cc1Swenshuai.xi /*
2356*53ee8cc1Swenshuai.xi   For security, the main invariant is that malloc/free/etc never
2357*53ee8cc1Swenshuai.xi   writes to a static address other than malloc_state, unless static
2358*53ee8cc1Swenshuai.xi   malloc_state itself has been corrupted, which cannot occur via
2359*53ee8cc1Swenshuai.xi   malloc (because of these checks). In essence this means that we
2360*53ee8cc1Swenshuai.xi   believe all pointers, sizes, maps etc held in malloc_state, but
2361*53ee8cc1Swenshuai.xi   check all of those linked or offsetted from other embedded data
2362*53ee8cc1Swenshuai.xi   structures.  These checks are interspersed with main code in a way
2363*53ee8cc1Swenshuai.xi   that tends to minimize their run-time cost.
2364*53ee8cc1Swenshuai.xi 
2365*53ee8cc1Swenshuai.xi   When FOOTERS is defined, in addition to range checking, we also
2366*53ee8cc1Swenshuai.xi   verify footer fields of inuse chunks, which can be used to guarantee
2367*53ee8cc1Swenshuai.xi   that the mstate controlling malloc/free is intact.  This is a
2368*53ee8cc1Swenshuai.xi   streamlined version of the approach described by William Robertson
2369*53ee8cc1Swenshuai.xi   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2370*53ee8cc1Swenshuai.xi   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2371*53ee8cc1Swenshuai.xi   of an inuse chunk holds the xor of its mstate and a random seed,
2372*53ee8cc1Swenshuai.xi   that is checked upon calls to free() and realloc().  This is
2373*53ee8cc1Swenshuai.xi   (probabilistically) unguessable from outside the program, but can be
2374*53ee8cc1Swenshuai.xi   computed by any code successfully malloc'ing any chunk, so does not
2375*53ee8cc1Swenshuai.xi   itself provide protection against code that has already broken
2376*53ee8cc1Swenshuai.xi   security through some other means.  Unlike Robertson et al, we
2377*53ee8cc1Swenshuai.xi   always dynamically check addresses of all offset chunks (previous,
2378*53ee8cc1Swenshuai.xi   next, etc). This turns out to be cheaper than relying on hashes.
2379*53ee8cc1Swenshuai.xi */
2380*53ee8cc1Swenshuai.xi 
2381*53ee8cc1Swenshuai.xi #if !INSECURE
2382*53ee8cc1Swenshuai.xi /* Check if address a is at least as high as any from MORECORE or MMAP */
2383*53ee8cc1Swenshuai.xi #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2384*53ee8cc1Swenshuai.xi /* Check if address of next chunk n is higher than base chunk p */
2385*53ee8cc1Swenshuai.xi #define ok_next(p, n)    ((char*)(p) < (char*)(n))
2386*53ee8cc1Swenshuai.xi /* Check if p has its cinuse bit on */
2387*53ee8cc1Swenshuai.xi #define ok_cinuse(p)     cinuse(p)
2388*53ee8cc1Swenshuai.xi /* Check if p has its pinuse bit on */
2389*53ee8cc1Swenshuai.xi #define ok_pinuse(p)     pinuse(p)
2390*53ee8cc1Swenshuai.xi 
2391*53ee8cc1Swenshuai.xi #else /* !INSECURE */
2392*53ee8cc1Swenshuai.xi #define ok_address(M, a) (1)
2393*53ee8cc1Swenshuai.xi #define ok_next(b, n)    (1)
2394*53ee8cc1Swenshuai.xi #define ok_cinuse(p)     (1)
2395*53ee8cc1Swenshuai.xi #define ok_pinuse(p)     (1)
2396*53ee8cc1Swenshuai.xi #endif /* !INSECURE */
2397*53ee8cc1Swenshuai.xi 
2398*53ee8cc1Swenshuai.xi #if (FOOTERS && !INSECURE)
2399*53ee8cc1Swenshuai.xi /* Check if (alleged) mstate m has expected magic field */
2400*53ee8cc1Swenshuai.xi #define ok_magic(M)      ((M)->magic == mparams.magic)
2401*53ee8cc1Swenshuai.xi #else  /* (FOOTERS && !INSECURE) */
2402*53ee8cc1Swenshuai.xi #define ok_magic(M)      (1)
2403*53ee8cc1Swenshuai.xi #endif /* (FOOTERS && !INSECURE) */
2404*53ee8cc1Swenshuai.xi 
2405*53ee8cc1Swenshuai.xi 
2406*53ee8cc1Swenshuai.xi /* In gcc, use __builtin_expect to minimize impact of checks */
2407*53ee8cc1Swenshuai.xi #if !INSECURE
2408*53ee8cc1Swenshuai.xi #if defined(__GNUC__) && __GNUC__ >= 3
2409*53ee8cc1Swenshuai.xi #define RTCHECK(e)  __builtin_expect(e, 1)
2410*53ee8cc1Swenshuai.xi #else /* GNUC */
2411*53ee8cc1Swenshuai.xi #define RTCHECK(e)  (e)
2412*53ee8cc1Swenshuai.xi #endif /* GNUC */
2413*53ee8cc1Swenshuai.xi #else /* !INSECURE */
2414*53ee8cc1Swenshuai.xi #define RTCHECK(e)  (1)
2415*53ee8cc1Swenshuai.xi #endif /* !INSECURE */
2416*53ee8cc1Swenshuai.xi 
2417*53ee8cc1Swenshuai.xi /* macros to set up inuse chunks with or without footers */
2418*53ee8cc1Swenshuai.xi 
2419*53ee8cc1Swenshuai.xi #if !FOOTERS
2420*53ee8cc1Swenshuai.xi 
2421*53ee8cc1Swenshuai.xi #define mark_inuse_foot(M,p,s)
2422*53ee8cc1Swenshuai.xi 
2423*53ee8cc1Swenshuai.xi /* Set cinuse bit and pinuse bit of next chunk */
2424*53ee8cc1Swenshuai.xi #define set_inuse(M,p,s)\
2425*53ee8cc1Swenshuai.xi   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2426*53ee8cc1Swenshuai.xi   ((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT)
2427*53ee8cc1Swenshuai.xi 
2428*53ee8cc1Swenshuai.xi /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2429*53ee8cc1Swenshuai.xi #define set_inuse_and_pinuse(M,p,s)\
2430*53ee8cc1Swenshuai.xi   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2431*53ee8cc1Swenshuai.xi   ((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT)
2432*53ee8cc1Swenshuai.xi 
2433*53ee8cc1Swenshuai.xi /* Set size, cinuse and pinuse bit of this chunk */
2434*53ee8cc1Swenshuai.xi #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2435*53ee8cc1Swenshuai.xi   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2436*53ee8cc1Swenshuai.xi 
2437*53ee8cc1Swenshuai.xi #else /* FOOTERS */
2438*53ee8cc1Swenshuai.xi 
2439*53ee8cc1Swenshuai.xi /* Set foot of inuse chunk to be xor of mstate and seed */
2440*53ee8cc1Swenshuai.xi #define mark_inuse_foot(M,p,s)\
2441*53ee8cc1Swenshuai.xi   (((mchunkptr)((unsigned long)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2442*53ee8cc1Swenshuai.xi 
2443*53ee8cc1Swenshuai.xi #define get_mstate_for(p)\
2444*53ee8cc1Swenshuai.xi   ((mstate)(((mchunkptr)((unsigned long)(p) +\
2445*53ee8cc1Swenshuai.xi     (chunksize(p))))->prev_foot ^ mparams.magic))
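
/*
  Sketch of how a FOOTERS build validates an incoming chunk p (a
  fragment, illustrative only): recover the owning mstate from the
  xor'ed footer, then cross-check its magic before trusting it.
*/
#if 0
  mstate fm = get_mstate_for(p);
  if (!ok_magic(fm)) {
    USAGE_ERROR_ACTION(fm, p);
    return;
  }
  /* ... proceed using fm as the mstate for p ... */
#endif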
2446*53ee8cc1Swenshuai.xi 
2447*53ee8cc1Swenshuai.xi #define set_inuse(M,p,s)\
2448*53ee8cc1Swenshuai.xi   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2449*53ee8cc1Swenshuai.xi   (((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT), \
2450*53ee8cc1Swenshuai.xi   mark_inuse_foot(M,p,s))
2451*53ee8cc1Swenshuai.xi 
2452*53ee8cc1Swenshuai.xi #define set_inuse_and_pinuse(M,p,s)\
2453*53ee8cc1Swenshuai.xi   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2454*53ee8cc1Swenshuai.xi   (((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT),\
2455*53ee8cc1Swenshuai.xi  mark_inuse_foot(M,p,s))
2456*53ee8cc1Swenshuai.xi 
2457*53ee8cc1Swenshuai.xi #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2458*53ee8cc1Swenshuai.xi   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2459*53ee8cc1Swenshuai.xi   mark_inuse_foot(M, p, s))
2460*53ee8cc1Swenshuai.xi 
2461*53ee8cc1Swenshuai.xi #endif /* !FOOTERS */
2462*53ee8cc1Swenshuai.xi 
2463*53ee8cc1Swenshuai.xi /* ---------------------------- setting mparams -------------------------- */
2464*53ee8cc1Swenshuai.xi 
2465*53ee8cc1Swenshuai.xi /* Initialize mparams */
2466*53ee8cc1Swenshuai.xi static int init_mparams(void) {
2467*53ee8cc1Swenshuai.xi   if (mparams.page_size == 0) {
2468*53ee8cc1Swenshuai.xi     size_t s;
2469*53ee8cc1Swenshuai.xi 
2470*53ee8cc1Swenshuai.xi     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2471*53ee8cc1Swenshuai.xi     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2472*53ee8cc1Swenshuai.xi #if MORECORE_CONTIGUOUS
2473*53ee8cc1Swenshuai.xi     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2474*53ee8cc1Swenshuai.xi #else  /* MORECORE_CONTIGUOUS */
2475*53ee8cc1Swenshuai.xi     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2476*53ee8cc1Swenshuai.xi #endif /* MORECORE_CONTIGUOUS */
2477*53ee8cc1Swenshuai.xi 
2478*53ee8cc1Swenshuai.xi #if (FOOTERS && !INSECURE)
2479*53ee8cc1Swenshuai.xi     {
2480*53ee8cc1Swenshuai.xi #if USE_DEV_RANDOM
2481*53ee8cc1Swenshuai.xi       int fd;
2482*53ee8cc1Swenshuai.xi       unsigned char buf[sizeof(size_t)];
2483*53ee8cc1Swenshuai.xi       /* Try to use /dev/urandom, else fall back on using time */
2484*53ee8cc1Swenshuai.xi       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2485*53ee8cc1Swenshuai.xi           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2486*53ee8cc1Swenshuai.xi         s = *((size_t *) buf);
2487*53ee8cc1Swenshuai.xi         close(fd);
2488*53ee8cc1Swenshuai.xi       }
2489*53ee8cc1Swenshuai.xi       else
2490*53ee8cc1Swenshuai.xi #endif /* USE_DEV_RANDOM */
2491*53ee8cc1Swenshuai.xi         s = (size_t)(time(0) ^ (size_t)0x55555555U);
2492*53ee8cc1Swenshuai.xi 
2493*53ee8cc1Swenshuai.xi       s |= (size_t)8U;    /* ensure nonzero */
2494*53ee8cc1Swenshuai.xi       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2495*53ee8cc1Swenshuai.xi 
2496*53ee8cc1Swenshuai.xi     }
2497*53ee8cc1Swenshuai.xi #else /* (FOOTERS && !INSECURE) */
2498*53ee8cc1Swenshuai.xi     s = (size_t)0x58585858U;
2499*53ee8cc1Swenshuai.xi #endif /* (FOOTERS && !INSECURE) */
2500*53ee8cc1Swenshuai.xi     {
2501*53ee8cc1Swenshuai.xi       int ret_value = ACQUIRE_MAGIC_INIT_LOCK();
2502*53ee8cc1Swenshuai.xi       if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
2503*53ee8cc1Swenshuai.xi     }
2504*53ee8cc1Swenshuai.xi     if (mparams.magic == 0) {
2505*53ee8cc1Swenshuai.xi       mparams.magic = s;
2506*53ee8cc1Swenshuai.xi       /* Set up lock for main malloc area */
2507*53ee8cc1Swenshuai.xi       {
2508*53ee8cc1Swenshuai.xi         int ret_value = INITIAL_LOCK(&gm->mutex);
2509*53ee8cc1Swenshuai.xi         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
2510*53ee8cc1Swenshuai.xi       }
2511*53ee8cc1Swenshuai.xi       gm->mflags = mparams.default_mflags;
2512*53ee8cc1Swenshuai.xi     }
2513*53ee8cc1Swenshuai.xi     RELEASE_MAGIC_INIT_LOCK();
2514*53ee8cc1Swenshuai.xi 
2515*53ee8cc1Swenshuai.xi #ifndef WIN32
2516*53ee8cc1Swenshuai.xi     mparams.page_size = 4096;    //malloc_getpagesize;
2517*53ee8cc1Swenshuai.xi     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2518*53ee8cc1Swenshuai.xi                            DEFAULT_GRANULARITY : mparams.page_size);
2519*53ee8cc1Swenshuai.xi #else /* WIN32 */
2520*53ee8cc1Swenshuai.xi     {
2521*53ee8cc1Swenshuai.xi       SYSTEM_INFO system_info;
2522*53ee8cc1Swenshuai.xi       GetSystemInfo(&system_info);
2523*53ee8cc1Swenshuai.xi       mparams.page_size = system_info.dwPageSize;
2524*53ee8cc1Swenshuai.xi       mparams.granularity = system_info.dwAllocationGranularity;
2525*53ee8cc1Swenshuai.xi     }
2526*53ee8cc1Swenshuai.xi #endif /* WIN32 */
2527*53ee8cc1Swenshuai.xi 
2528*53ee8cc1Swenshuai.xi     /* Sanity-check configuration:
2529*53ee8cc1Swenshuai.xi        size_t must be unsigned and as wide as pointer type.
2530*53ee8cc1Swenshuai.xi        ints must be at least 4 bytes.
2531*53ee8cc1Swenshuai.xi        alignment must be at least 8.
2532*53ee8cc1Swenshuai.xi        Alignment, min chunk size, and page size must all be powers of 2.
2533*53ee8cc1Swenshuai.xi     */
2534*53ee8cc1Swenshuai.xi     if ((sizeof(size_t) != sizeof(char*)) ||
2535*53ee8cc1Swenshuai.xi         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2536*53ee8cc1Swenshuai.xi         (sizeof(int) < 4)  ||
2537*53ee8cc1Swenshuai.xi         (MALLOC_ALIGNMENT < (size_t)8U) ||
2538*53ee8cc1Swenshuai.xi         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2539*53ee8cc1Swenshuai.xi         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2540*53ee8cc1Swenshuai.xi         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2541*53ee8cc1Swenshuai.xi         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2542*53ee8cc1Swenshuai.xi     {
2543*53ee8cc1Swenshuai.xi       // @FIXME: Richard remove this to avoid link to standard C library (libc.a)
2544*53ee8cc1Swenshuai.xi       //ABORT;
2545*53ee8cc1Swenshuai.xi     }
2546*53ee8cc1Swenshuai.xi   }
2547*53ee8cc1Swenshuai.xi   return 0;
2548*53ee8cc1Swenshuai.xi }
2549*53ee8cc1Swenshuai.xi 
2550*53ee8cc1Swenshuai.xi /* support for mallopt */
2551*53ee8cc1Swenshuai.xi static int change_mparam(int param_number, int value) {
2552*53ee8cc1Swenshuai.xi   size_t val = (size_t)value;
2553*53ee8cc1Swenshuai.xi   init_mparams();
2554*53ee8cc1Swenshuai.xi   switch(param_number) {
2555*53ee8cc1Swenshuai.xi   case M_TRIM_THRESHOLD:
2556*53ee8cc1Swenshuai.xi     mparams.trim_threshold = val;
2557*53ee8cc1Swenshuai.xi     return 1;
2558*53ee8cc1Swenshuai.xi   case M_GRANULARITY:
2559*53ee8cc1Swenshuai.xi     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2560*53ee8cc1Swenshuai.xi       mparams.granularity = val;
2561*53ee8cc1Swenshuai.xi       return 1;
2562*53ee8cc1Swenshuai.xi     }
2563*53ee8cc1Swenshuai.xi     else
2564*53ee8cc1Swenshuai.xi       return 0;
2565*53ee8cc1Swenshuai.xi   case M_MMAP_THRESHOLD:
2566*53ee8cc1Swenshuai.xi     mparams.mmap_threshold = val;
2567*53ee8cc1Swenshuai.xi     return 1;
2568*53ee8cc1Swenshuai.xi   default:
2569*53ee8cc1Swenshuai.xi     return 0;
2570*53ee8cc1Swenshuai.xi   }
2571*53ee8cc1Swenshuai.xi }
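
/*
  Illustrative use of change_mparam (a sketch; the values are
  examples only): a new granularity must be a power of two no
  smaller than the page size, or the call is rejected.
*/
#if 0
  change_mparam(M_MMAP_THRESHOLD, 1024 * 1024); /* returns 1 */
  change_mparam(M_GRANULARITY, 65536);          /* 1 if >= page_size and a power of 2 */
  change_mparam(M_GRANULARITY, 3000);           /* 0: not a power of two */
#endif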
2572*53ee8cc1Swenshuai.xi 
2573*53ee8cc1Swenshuai.xi #if DEBUG
2574*53ee8cc1Swenshuai.xi /* ------------------------- Debugging Support --------------------------- */
2575*53ee8cc1Swenshuai.xi 
2576*53ee8cc1Swenshuai.xi /* Check properties of any chunk, whether free, inuse, mmapped etc  */
2577*53ee8cc1Swenshuai.xi static void do_check_any_chunk(mstate m, mchunkptr p) {
2578*53ee8cc1Swenshuai.xi   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2579*53ee8cc1Swenshuai.xi   assert(ok_address(m, p));
2580*53ee8cc1Swenshuai.xi }
2581*53ee8cc1Swenshuai.xi 
2582*53ee8cc1Swenshuai.xi /* Check properties of top chunk */
2583*53ee8cc1Swenshuai.xi static void do_check_top_chunk(mstate m, mchunkptr p) {
2584*53ee8cc1Swenshuai.xi   msegmentptr sp = segment_holding(m, (char*)p);
2585*53ee8cc1Swenshuai.xi   size_t  sz = chunksize(p);
2586*53ee8cc1Swenshuai.xi   assert(sp != 0);
2587*53ee8cc1Swenshuai.xi   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2588*53ee8cc1Swenshuai.xi   assert(ok_address(m, p));
2589*53ee8cc1Swenshuai.xi   assert(sz == m->topsize);
2590*53ee8cc1Swenshuai.xi   assert(sz > 0);
2591*53ee8cc1Swenshuai.xi   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2592*53ee8cc1Swenshuai.xi   assert(pinuse(p));
2593*53ee8cc1Swenshuai.xi   assert(!next_pinuse(p));
2594*53ee8cc1Swenshuai.xi }
2595*53ee8cc1Swenshuai.xi 
2596*53ee8cc1Swenshuai.xi /* Check properties of (inuse) mmapped chunks */
2597*53ee8cc1Swenshuai.xi static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2598*53ee8cc1Swenshuai.xi   size_t  sz = chunksize(p);
2599*53ee8cc1Swenshuai.xi   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2600*53ee8cc1Swenshuai.xi   assert(is_mmapped(p));
2601*53ee8cc1Swenshuai.xi   assert(use_mmap(m));
2602*53ee8cc1Swenshuai.xi   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2603*53ee8cc1Swenshuai.xi   assert(ok_address(m, p));
2604*53ee8cc1Swenshuai.xi   assert(!is_small(sz));
2605*53ee8cc1Swenshuai.xi   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2606*53ee8cc1Swenshuai.xi   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2607*53ee8cc1Swenshuai.xi   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2608*53ee8cc1Swenshuai.xi }
2609*53ee8cc1Swenshuai.xi 
2610*53ee8cc1Swenshuai.xi /* Check properties of inuse chunks */
2611*53ee8cc1Swenshuai.xi static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2612*53ee8cc1Swenshuai.xi   do_check_any_chunk(m, p);
2613*53ee8cc1Swenshuai.xi   assert(cinuse(p));
2614*53ee8cc1Swenshuai.xi   assert(next_pinuse(p));
2615*53ee8cc1Swenshuai.xi   /* If not pinuse and not mmapped, previous chunk has OK offset */
2616*53ee8cc1Swenshuai.xi   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2617*53ee8cc1Swenshuai.xi   if (is_mmapped(p))
2618*53ee8cc1Swenshuai.xi     do_check_mmapped_chunk(m, p);
2619*53ee8cc1Swenshuai.xi }
2620*53ee8cc1Swenshuai.xi 
2621*53ee8cc1Swenshuai.xi /* Check properties of free chunks */
2622*53ee8cc1Swenshuai.xi static void do_check_free_chunk(mstate m, mchunkptr p) {
2623*53ee8cc1Swenshuai.xi   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2624*53ee8cc1Swenshuai.xi   mchunkptr next = chunk_plus_offset(p, sz);
2625*53ee8cc1Swenshuai.xi   do_check_any_chunk(m, p);
2626*53ee8cc1Swenshuai.xi   assert(!cinuse(p));
2627*53ee8cc1Swenshuai.xi   assert(!next_pinuse(p));
2628*53ee8cc1Swenshuai.xi   assert (!is_mmapped(p));
2629*53ee8cc1Swenshuai.xi   if (p != m->dv && p != m->top) {
2630*53ee8cc1Swenshuai.xi     if (sz >= MIN_CHUNK_SIZE) {
2631*53ee8cc1Swenshuai.xi       assert((sz & CHUNK_ALIGN_MASK) == 0);
2632*53ee8cc1Swenshuai.xi       assert(is_aligned(chunk2mem(p)));
2633*53ee8cc1Swenshuai.xi       assert(next->prev_foot == sz);
2634*53ee8cc1Swenshuai.xi       assert(pinuse(p));
2635*53ee8cc1Swenshuai.xi       assert (next == m->top || cinuse(next));
2636*53ee8cc1Swenshuai.xi       assert(p->fd->bk == p);
2637*53ee8cc1Swenshuai.xi       assert(p->bk->fd == p);
2638*53ee8cc1Swenshuai.xi     }
2639*53ee8cc1Swenshuai.xi     else  /* markers are always of size SIZE_T_SIZE */
2640*53ee8cc1Swenshuai.xi       assert(sz == SIZE_T_SIZE);
2641*53ee8cc1Swenshuai.xi   }
2642*53ee8cc1Swenshuai.xi }
2643*53ee8cc1Swenshuai.xi 
2644*53ee8cc1Swenshuai.xi /* Check properties of malloced chunks at the point they are malloced */
2645*53ee8cc1Swenshuai.xi static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2646*53ee8cc1Swenshuai.xi   if (mem != 0) {
2647*53ee8cc1Swenshuai.xi     mchunkptr p = mem2chunk(mem);
2648*53ee8cc1Swenshuai.xi     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2649*53ee8cc1Swenshuai.xi     do_check_inuse_chunk(m, p);
2650*53ee8cc1Swenshuai.xi     assert((sz & CHUNK_ALIGN_MASK) == 0);
2651*53ee8cc1Swenshuai.xi     assert(sz >= MIN_CHUNK_SIZE);
2652*53ee8cc1Swenshuai.xi     assert(sz >= s);
2653*53ee8cc1Swenshuai.xi     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2654*53ee8cc1Swenshuai.xi     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2655*53ee8cc1Swenshuai.xi   }
2656*53ee8cc1Swenshuai.xi }
2657*53ee8cc1Swenshuai.xi 
2658*53ee8cc1Swenshuai.xi /* Check a tree and its subtrees.  */
2659*53ee8cc1Swenshuai.xi static void do_check_tree(mstate m, tchunkptr t) {
2660*53ee8cc1Swenshuai.xi   tchunkptr head = 0;
2661*53ee8cc1Swenshuai.xi   tchunkptr u = t;
2662*53ee8cc1Swenshuai.xi   bindex_t tindex = t->index;
2663*53ee8cc1Swenshuai.xi   size_t tsize = chunksize(t);
2664*53ee8cc1Swenshuai.xi   bindex_t idx;
2665*53ee8cc1Swenshuai.xi   compute_tree_index(tsize, idx);
2666*53ee8cc1Swenshuai.xi   assert(tindex == idx);
2667*53ee8cc1Swenshuai.xi   assert(tsize >= MIN_LARGE_SIZE);
2668*53ee8cc1Swenshuai.xi   assert(tsize >= minsize_for_tree_index(idx));
2669*53ee8cc1Swenshuai.xi   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2670*53ee8cc1Swenshuai.xi 
2671*53ee8cc1Swenshuai.xi   do { /* traverse through chain of same-sized nodes */
2672*53ee8cc1Swenshuai.xi     do_check_any_chunk(m, ((mchunkptr)u));
2673*53ee8cc1Swenshuai.xi     assert(u->index == tindex);
2674*53ee8cc1Swenshuai.xi     assert(chunksize(u) == tsize);
2675*53ee8cc1Swenshuai.xi     assert(!cinuse(u));
2676*53ee8cc1Swenshuai.xi     assert(!next_pinuse(u));
2677*53ee8cc1Swenshuai.xi     assert(u->fd->bk == u);
2678*53ee8cc1Swenshuai.xi     assert(u->bk->fd == u);
2679*53ee8cc1Swenshuai.xi     if (u->parent == 0) {
2680*53ee8cc1Swenshuai.xi       assert(u->child[0] == 0);
2681*53ee8cc1Swenshuai.xi       assert(u->child[1] == 0);
2682*53ee8cc1Swenshuai.xi     }
2683*53ee8cc1Swenshuai.xi     else {
2684*53ee8cc1Swenshuai.xi       assert(head == 0); /* only one node on chain has parent */
2685*53ee8cc1Swenshuai.xi       head = u;
2686*53ee8cc1Swenshuai.xi       assert(u->parent != u);
2687*53ee8cc1Swenshuai.xi       assert (u->parent->child[0] == u ||
2688*53ee8cc1Swenshuai.xi               u->parent->child[1] == u ||
2689*53ee8cc1Swenshuai.xi               *((tbinptr*)(u->parent)) == u);
2690*53ee8cc1Swenshuai.xi       if (u->child[0] != 0) {
2691*53ee8cc1Swenshuai.xi         assert(u->child[0]->parent == u);
2692*53ee8cc1Swenshuai.xi         assert(u->child[0] != u);
2693*53ee8cc1Swenshuai.xi         do_check_tree(m, u->child[0]);
2694*53ee8cc1Swenshuai.xi       }
2695*53ee8cc1Swenshuai.xi       if (u->child[1] != 0) {
2696*53ee8cc1Swenshuai.xi         assert(u->child[1]->parent == u);
2697*53ee8cc1Swenshuai.xi         assert(u->child[1] != u);
2698*53ee8cc1Swenshuai.xi         do_check_tree(m, u->child[1]);
2699*53ee8cc1Swenshuai.xi       }
2700*53ee8cc1Swenshuai.xi       if (u->child[0] != 0 && u->child[1] != 0) {
2701*53ee8cc1Swenshuai.xi         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2702*53ee8cc1Swenshuai.xi       }
2703*53ee8cc1Swenshuai.xi     }
2704*53ee8cc1Swenshuai.xi     u = u->fd;
2705*53ee8cc1Swenshuai.xi   } while (u != t);
2706*53ee8cc1Swenshuai.xi   assert(head != 0);
2707*53ee8cc1Swenshuai.xi }
2708*53ee8cc1Swenshuai.xi 
2709*53ee8cc1Swenshuai.xi /*  Check all the chunks in a treebin.  */
2710*53ee8cc1Swenshuai.xi static void do_check_treebin(mstate m, bindex_t i) {
2711*53ee8cc1Swenshuai.xi   tbinptr* tb = treebin_at(m, i);
2712*53ee8cc1Swenshuai.xi   tchunkptr t = *tb;
2713*53ee8cc1Swenshuai.xi   int empty = (m->treemap & (1U << i)) == 0;
2714*53ee8cc1Swenshuai.xi   if (t == 0)
2715*53ee8cc1Swenshuai.xi     assert(empty);
2716*53ee8cc1Swenshuai.xi   if (!empty)
2717*53ee8cc1Swenshuai.xi     do_check_tree(m, t);
2718*53ee8cc1Swenshuai.xi }
2719*53ee8cc1Swenshuai.xi 
2720*53ee8cc1Swenshuai.xi /*  Check all the chunks in a smallbin.  */
2721*53ee8cc1Swenshuai.xi static void do_check_smallbin(mstate m, bindex_t i) {
2722*53ee8cc1Swenshuai.xi   sbinptr b = smallbin_at(m, i);
2723*53ee8cc1Swenshuai.xi   mchunkptr p = b->bk;
2724*53ee8cc1Swenshuai.xi   unsigned int empty = (m->smallmap & (1U << i)) == 0;
2725*53ee8cc1Swenshuai.xi   if (p == b)
2726*53ee8cc1Swenshuai.xi     assert(empty);
2727*53ee8cc1Swenshuai.xi   if (!empty) {
2728*53ee8cc1Swenshuai.xi     for (; p != b; p = p->bk) {
2729*53ee8cc1Swenshuai.xi       size_t size = chunksize(p);
2730*53ee8cc1Swenshuai.xi       mchunkptr q;
2731*53ee8cc1Swenshuai.xi       /* each chunk claims to be free */
2732*53ee8cc1Swenshuai.xi       do_check_free_chunk(m, p);
2733*53ee8cc1Swenshuai.xi       /* chunk belongs in bin */
2734*53ee8cc1Swenshuai.xi       assert(small_index(size) == i);
2735*53ee8cc1Swenshuai.xi       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2736*53ee8cc1Swenshuai.xi       /* chunk is followed by an inuse chunk */
2737*53ee8cc1Swenshuai.xi       q = next_chunk(p);
2738*53ee8cc1Swenshuai.xi       if (q->head != FENCEPOST_HEAD)
2739*53ee8cc1Swenshuai.xi         do_check_inuse_chunk(m, q);
2740*53ee8cc1Swenshuai.xi     }
2741*53ee8cc1Swenshuai.xi   }
2742*53ee8cc1Swenshuai.xi }
2743*53ee8cc1Swenshuai.xi 
2744*53ee8cc1Swenshuai.xi /* Find x in a bin. Used in other check functions. */
2745*53ee8cc1Swenshuai.xi static int bin_find(mstate m, mchunkptr x) {
2746*53ee8cc1Swenshuai.xi   size_t size = chunksize(x);
2747*53ee8cc1Swenshuai.xi   if (is_small(size)) {
2748*53ee8cc1Swenshuai.xi     bindex_t sidx = small_index(size);
2749*53ee8cc1Swenshuai.xi     sbinptr b = smallbin_at(m, sidx);
2750*53ee8cc1Swenshuai.xi     if (smallmap_is_marked(m, sidx)) {
2751*53ee8cc1Swenshuai.xi       mchunkptr p = b;
2752*53ee8cc1Swenshuai.xi       do {
2753*53ee8cc1Swenshuai.xi         if (p == x)
2754*53ee8cc1Swenshuai.xi           return 1;
2755*53ee8cc1Swenshuai.xi       } while ((p = p->fd) != b);
2756*53ee8cc1Swenshuai.xi     }
2757*53ee8cc1Swenshuai.xi   }
2758*53ee8cc1Swenshuai.xi   else {
2759*53ee8cc1Swenshuai.xi     bindex_t tidx;
2760*53ee8cc1Swenshuai.xi     compute_tree_index(size, tidx);
2761*53ee8cc1Swenshuai.xi     if (treemap_is_marked(m, tidx)) {
2762*53ee8cc1Swenshuai.xi       tchunkptr t = *treebin_at(m, tidx);
2763*53ee8cc1Swenshuai.xi       size_t sizebits = size << leftshift_for_tree_index(tidx);
2764*53ee8cc1Swenshuai.xi       while (t != 0 && chunksize(t) != size) {
2765*53ee8cc1Swenshuai.xi         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2766*53ee8cc1Swenshuai.xi         sizebits <<= 1;
2767*53ee8cc1Swenshuai.xi       }
2768*53ee8cc1Swenshuai.xi       if (t != 0) {
2769*53ee8cc1Swenshuai.xi         tchunkptr u = t;
2770*53ee8cc1Swenshuai.xi         do {
2771*53ee8cc1Swenshuai.xi           if (u == (tchunkptr)x)
2772*53ee8cc1Swenshuai.xi             return 1;
2773*53ee8cc1Swenshuai.xi         } while ((u = u->fd) != t);
2774*53ee8cc1Swenshuai.xi       }
2775*53ee8cc1Swenshuai.xi     }
2776*53ee8cc1Swenshuai.xi   }
2777*53ee8cc1Swenshuai.xi   return 0;
2778*53ee8cc1Swenshuai.xi }
2779*53ee8cc1Swenshuai.xi 
2780*53ee8cc1Swenshuai.xi /* Traverse each chunk and check it; return total */
2781*53ee8cc1Swenshuai.xi static size_t traverse_and_check(mstate m) {
2782*53ee8cc1Swenshuai.xi   size_t sum = 0;
2783*53ee8cc1Swenshuai.xi   if (is_initialized(m)) {
2784*53ee8cc1Swenshuai.xi     msegmentptr s = &m->seg;
2785*53ee8cc1Swenshuai.xi     sum += m->topsize + TOP_FOOT_SIZE;
2786*53ee8cc1Swenshuai.xi     while (s != 0) {
2787*53ee8cc1Swenshuai.xi       mchunkptr q = align_as_chunk(s->base);
2788*53ee8cc1Swenshuai.xi       mchunkptr lastq = 0;
2789*53ee8cc1Swenshuai.xi       assert(pinuse(q));
2790*53ee8cc1Swenshuai.xi       while (segment_holds(s, q) &&
2791*53ee8cc1Swenshuai.xi              q != m->top && q->head != FENCEPOST_HEAD) {
2792*53ee8cc1Swenshuai.xi         sum += chunksize(q);
2793*53ee8cc1Swenshuai.xi         if (cinuse(q)) {
2794*53ee8cc1Swenshuai.xi           assert(!bin_find(m, q));
2795*53ee8cc1Swenshuai.xi           do_check_inuse_chunk(m, q);
2796*53ee8cc1Swenshuai.xi         }
2797*53ee8cc1Swenshuai.xi         else {
2798*53ee8cc1Swenshuai.xi           assert(q == m->dv || bin_find(m, q));
2799*53ee8cc1Swenshuai.xi           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2800*53ee8cc1Swenshuai.xi           do_check_free_chunk(m, q);
2801*53ee8cc1Swenshuai.xi         }
2802*53ee8cc1Swenshuai.xi         lastq = q;
2803*53ee8cc1Swenshuai.xi         q = next_chunk(q);
2804*53ee8cc1Swenshuai.xi       }
2805*53ee8cc1Swenshuai.xi       s = s->next;
2806*53ee8cc1Swenshuai.xi     }
2807*53ee8cc1Swenshuai.xi   }
2808*53ee8cc1Swenshuai.xi   return sum;
2809*53ee8cc1Swenshuai.xi }
2810*53ee8cc1Swenshuai.xi 
2811*53ee8cc1Swenshuai.xi /* Check all properties of malloc_state. */
2812*53ee8cc1Swenshuai.xi static void do_check_malloc_state(mstate m) {
2813*53ee8cc1Swenshuai.xi   bindex_t i;
2814*53ee8cc1Swenshuai.xi   size_t total;
2815*53ee8cc1Swenshuai.xi   /* check bins */
2816*53ee8cc1Swenshuai.xi   for (i = 0; i < NSMALLBINS; ++i)
2817*53ee8cc1Swenshuai.xi     do_check_smallbin(m, i);
2818*53ee8cc1Swenshuai.xi   for (i = 0; i < NTREEBINS; ++i)
2819*53ee8cc1Swenshuai.xi     do_check_treebin(m, i);
2820*53ee8cc1Swenshuai.xi 
2821*53ee8cc1Swenshuai.xi   if (m->dvsize != 0) { /* check dv chunk */
2822*53ee8cc1Swenshuai.xi     do_check_any_chunk(m, m->dv);
2823*53ee8cc1Swenshuai.xi     assert(m->dvsize == chunksize(m->dv));
2824*53ee8cc1Swenshuai.xi     assert(m->dvsize >= MIN_CHUNK_SIZE);
2825*53ee8cc1Swenshuai.xi     assert(bin_find(m, m->dv) == 0);
2826*53ee8cc1Swenshuai.xi   }
2827*53ee8cc1Swenshuai.xi 
2828*53ee8cc1Swenshuai.xi   if (m->top != 0) {   /* check top chunk */
2829*53ee8cc1Swenshuai.xi     do_check_top_chunk(m, m->top);
2830*53ee8cc1Swenshuai.xi     assert(m->topsize == chunksize(m->top));
2831*53ee8cc1Swenshuai.xi     assert(m->topsize > 0);
2832*53ee8cc1Swenshuai.xi     assert(bin_find(m, m->top) == 0);
2833*53ee8cc1Swenshuai.xi   }
2834*53ee8cc1Swenshuai.xi 
2835*53ee8cc1Swenshuai.xi   total = traverse_and_check(m);
2836*53ee8cc1Swenshuai.xi   assert(total <= m->footprint);
2837*53ee8cc1Swenshuai.xi   assert(m->footprint <= m->max_footprint);
2838*53ee8cc1Swenshuai.xi }
2839*53ee8cc1Swenshuai.xi #endif /* DEBUG */
2840*53ee8cc1Swenshuai.xi 
2841*53ee8cc1Swenshuai.xi /* ----------------------------- statistics ------------------------------ */
2842*53ee8cc1Swenshuai.xi 
2843*53ee8cc1Swenshuai.xi #if !NO_MALLINFO
2844*53ee8cc1Swenshuai.xi static struct mallinfo internal_mallinfo(mstate m, size_t *plargest_free) {
2845*53ee8cc1Swenshuai.xi   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2846*53ee8cc1Swenshuai.xi   if (!PREACTION(m)) {
2847*53ee8cc1Swenshuai.xi     check_malloc_state(m);
2848*53ee8cc1Swenshuai.xi     if (is_initialized(m)) {
2849*53ee8cc1Swenshuai.xi       size_t nfree = SIZE_T_ONE; /* top always free */
2850*53ee8cc1Swenshuai.xi       size_t mfree = m->topsize + TOP_FOOT_SIZE;
2851*53ee8cc1Swenshuai.xi       size_t sum = mfree;
2852*53ee8cc1Swenshuai.xi       *plargest_free = mfree;
2853*53ee8cc1Swenshuai.xi       msegmentptr s = &m->seg;
2854*53ee8cc1Swenshuai.xi       while (s != 0) {
2855*53ee8cc1Swenshuai.xi         mchunkptr q = align_as_chunk(s->base);
2856*53ee8cc1Swenshuai.xi         while (segment_holds(s, q) &&
2857*53ee8cc1Swenshuai.xi                q != m->top && q->head != FENCEPOST_HEAD) {
2858*53ee8cc1Swenshuai.xi           size_t sz = chunksize(q);
2859*53ee8cc1Swenshuai.xi           sum += sz;
2860*53ee8cc1Swenshuai.xi           if (!cinuse(q)) {
2861*53ee8cc1Swenshuai.xi             mfree += sz;
2862*53ee8cc1Swenshuai.xi             if (*plargest_free < sz)
2863*53ee8cc1Swenshuai.xi                 *plargest_free = sz;
2864*53ee8cc1Swenshuai.xi             ++nfree;
2865*53ee8cc1Swenshuai.xi           }
2866*53ee8cc1Swenshuai.xi           q = next_chunk(q);
2867*53ee8cc1Swenshuai.xi         }
2868*53ee8cc1Swenshuai.xi         s = s->next;
2869*53ee8cc1Swenshuai.xi       }
2870*53ee8cc1Swenshuai.xi 
2871*53ee8cc1Swenshuai.xi       nm.arena    = sum;
2872*53ee8cc1Swenshuai.xi       nm.ordblks  = nfree;
2873*53ee8cc1Swenshuai.xi       nm.hblkhd   = m->footprint - sum;
2874*53ee8cc1Swenshuai.xi       nm.usmblks  = m->max_footprint;
2875*53ee8cc1Swenshuai.xi       nm.uordblks = m->footprint - mfree;
2876*53ee8cc1Swenshuai.xi       nm.fordblks = mfree;
2877*53ee8cc1Swenshuai.xi       nm.keepcost = m->topsize;
2878*53ee8cc1Swenshuai.xi     }
2879*53ee8cc1Swenshuai.xi 
2880*53ee8cc1Swenshuai.xi     POSTACTION(m);
2881*53ee8cc1Swenshuai.xi   }
2882*53ee8cc1Swenshuai.xi   return nm;
2883*53ee8cc1Swenshuai.xi }
2884*53ee8cc1Swenshuai.xi #endif /* !NO_MALLINFO */
2885*53ee8cc1Swenshuai.xi 
2886*53ee8cc1Swenshuai.xi static void internal_malloc_stats(mstate m) {
2887*53ee8cc1Swenshuai.xi   if (!PREACTION(m)) {
2888*53ee8cc1Swenshuai.xi     size_t maxfp = 0;
2889*53ee8cc1Swenshuai.xi     size_t fp = 0;
2890*53ee8cc1Swenshuai.xi     size_t used = 0;
2891*53ee8cc1Swenshuai.xi     check_malloc_state(m);
2892*53ee8cc1Swenshuai.xi     if (is_initialized(m)) {
2893*53ee8cc1Swenshuai.xi       msegmentptr s = &m->seg;
2894*53ee8cc1Swenshuai.xi       maxfp = m->max_footprint;
2895*53ee8cc1Swenshuai.xi       fp = m->footprint;
2896*53ee8cc1Swenshuai.xi       used = fp - (m->topsize + TOP_FOOT_SIZE);
2897*53ee8cc1Swenshuai.xi 
2898*53ee8cc1Swenshuai.xi       while (s != 0) {
2899*53ee8cc1Swenshuai.xi         mchunkptr q = align_as_chunk(s->base);
2900*53ee8cc1Swenshuai.xi         while (segment_holds(s, q) &&
2901*53ee8cc1Swenshuai.xi                q != m->top && q->head != FENCEPOST_HEAD) {
2902*53ee8cc1Swenshuai.xi           if (!cinuse(q))
2903*53ee8cc1Swenshuai.xi             used -= chunksize(q);
2904*53ee8cc1Swenshuai.xi           q = next_chunk(q);
2905*53ee8cc1Swenshuai.xi         }
2906*53ee8cc1Swenshuai.xi         s = s->next;
2907*53ee8cc1Swenshuai.xi       }
2908*53ee8cc1Swenshuai.xi     }
2909*53ee8cc1Swenshuai.xi 
2910*53ee8cc1Swenshuai.xi    // fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2911*53ee8cc1Swenshuai.xi    // fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2912*53ee8cc1Swenshuai.xi    // fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2913*53ee8cc1Swenshuai.xi 
2914*53ee8cc1Swenshuai.xi     POSTACTION(m);
2915*53ee8cc1Swenshuai.xi   }
2916*53ee8cc1Swenshuai.xi }
2917*53ee8cc1Swenshuai.xi 
2918*53ee8cc1Swenshuai.xi /* ----------------------- Operations on smallbins ----------------------- */
2919*53ee8cc1Swenshuai.xi 
2920*53ee8cc1Swenshuai.xi /*
2921*53ee8cc1Swenshuai.xi   Various forms of linking and unlinking are defined as macros, even
2922*53ee8cc1Swenshuai.xi   the tree versions, which are textually long but have short typical
2923*53ee8cc1Swenshuai.xi   execution paths.  This is ugly but reduces reliance on the inlining
2924*53ee8cc1Swenshuai.xi   support of compilers.
2925*53ee8cc1Swenshuai.xi */
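
/*
  Informal sketch of the list surgery performed by insert_small_chunk
  below.  Bins are circular doubly-linked lists headed by a bin sentinel
  B; F is B's old first chunk (F == B when the bin was empty):

    before:  B <-> F <-> ... <-> B
    after:   B <-> P <-> F <-> ... <-> B

  P is spliced in at the front; unlink_small_chunk undoes this by joining
  P's fd/bk neighbors back together.
*/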
2926*53ee8cc1Swenshuai.xi 
2927*53ee8cc1Swenshuai.xi /* Link a free chunk into a smallbin  */
2928*53ee8cc1Swenshuai.xi #define insert_small_chunk(M, P, S) {\
2929*53ee8cc1Swenshuai.xi   bindex_t I  = small_index(S);\
2930*53ee8cc1Swenshuai.xi   mchunkptr B = smallbin_at(M, I);\
2931*53ee8cc1Swenshuai.xi   mchunkptr F = B;\
2932*53ee8cc1Swenshuai.xi   assert(S >= MIN_CHUNK_SIZE);\
2933*53ee8cc1Swenshuai.xi   if (!smallmap_is_marked(M, I))\
2934*53ee8cc1Swenshuai.xi     mark_smallmap(M, I);\
2935*53ee8cc1Swenshuai.xi   else if (RTCHECK(ok_address(M, B->fd)))\
2936*53ee8cc1Swenshuai.xi     F = B->fd;\
2937*53ee8cc1Swenshuai.xi   else {\
2938*53ee8cc1Swenshuai.xi     CORRUPTION_ERROR_ACTION(M);\
2939*53ee8cc1Swenshuai.xi   }\
2940*53ee8cc1Swenshuai.xi   B->fd = P;\
2941*53ee8cc1Swenshuai.xi   F->bk = P;\
2942*53ee8cc1Swenshuai.xi   P->fd = F;\
2943*53ee8cc1Swenshuai.xi   P->bk = B;\
2944*53ee8cc1Swenshuai.xi }
2945*53ee8cc1Swenshuai.xi 
2946*53ee8cc1Swenshuai.xi /* Unlink a chunk from a smallbin  */
2947*53ee8cc1Swenshuai.xi #define unlink_small_chunk(M, P, S) {\
2948*53ee8cc1Swenshuai.xi   mchunkptr F = P->fd;\
2949*53ee8cc1Swenshuai.xi   mchunkptr B = P->bk;\
2950*53ee8cc1Swenshuai.xi   bindex_t I = small_index(S);\
2951*53ee8cc1Swenshuai.xi   assert(P != B);\
2952*53ee8cc1Swenshuai.xi   assert(P != F);\
2953*53ee8cc1Swenshuai.xi   assert(chunksize(P) == small_index2size(I));\
2954*53ee8cc1Swenshuai.xi   if (F == B)\
2955*53ee8cc1Swenshuai.xi     clear_smallmap(M, I);\
2956*53ee8cc1Swenshuai.xi   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2957*53ee8cc1Swenshuai.xi                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2958*53ee8cc1Swenshuai.xi     F->bk = B;\
2959*53ee8cc1Swenshuai.xi     B->fd = F;\
2960*53ee8cc1Swenshuai.xi   }\
2961*53ee8cc1Swenshuai.xi   else {\
2962*53ee8cc1Swenshuai.xi     CORRUPTION_ERROR_ACTION(M);\
2963*53ee8cc1Swenshuai.xi   }\
2964*53ee8cc1Swenshuai.xi }
2965*53ee8cc1Swenshuai.xi 
2966*53ee8cc1Swenshuai.xi /* Unlink the first chunk from a smallbin */
2967*53ee8cc1Swenshuai.xi #define unlink_first_small_chunk(M, B, P, I) {\
2968*53ee8cc1Swenshuai.xi   mchunkptr F = P->fd;\
2969*53ee8cc1Swenshuai.xi   assert(P != B);\
2970*53ee8cc1Swenshuai.xi   assert(P != F);\
2971*53ee8cc1Swenshuai.xi   assert(chunksize(P) == small_index2size(I));\
2972*53ee8cc1Swenshuai.xi   if (B == F)\
2973*53ee8cc1Swenshuai.xi     clear_smallmap(M, I);\
2974*53ee8cc1Swenshuai.xi   else if (RTCHECK(ok_address(M, F))) {\
2975*53ee8cc1Swenshuai.xi     B->fd = F;\
2976*53ee8cc1Swenshuai.xi     F->bk = B;\
2977*53ee8cc1Swenshuai.xi   }\
2978*53ee8cc1Swenshuai.xi   else {\
2979*53ee8cc1Swenshuai.xi     CORRUPTION_ERROR_ACTION(M);\
2980*53ee8cc1Swenshuai.xi   }\
2981*53ee8cc1Swenshuai.xi }
2982*53ee8cc1Swenshuai.xi 
2983*53ee8cc1Swenshuai.xi /* Replace dv node, binning the old one */
2984*53ee8cc1Swenshuai.xi /* Used only when dvsize known to be small */
2985*53ee8cc1Swenshuai.xi #define replace_dv(M, P, S) {\
2986*53ee8cc1Swenshuai.xi   size_t DVS = M->dvsize;\
2987*53ee8cc1Swenshuai.xi   if (DVS != 0) {\
2988*53ee8cc1Swenshuai.xi     mchunkptr DV = M->dv;\
2989*53ee8cc1Swenshuai.xi     assert(is_small(DVS));\
2990*53ee8cc1Swenshuai.xi     insert_small_chunk(M, DV, DVS);\
2991*53ee8cc1Swenshuai.xi   }\
2992*53ee8cc1Swenshuai.xi   M->dvsize = S;\
2993*53ee8cc1Swenshuai.xi   M->dv = P;\
2994*53ee8cc1Swenshuai.xi }
2995*53ee8cc1Swenshuai.xi 
2996*53ee8cc1Swenshuai.xi /* ------------------------- Operations on trees ------------------------- */
2997*53ee8cc1Swenshuai.xi 
2998*53ee8cc1Swenshuai.xi /* Insert chunk into tree */
2999*53ee8cc1Swenshuai.xi #define insert_large_chunk(M, X, S) {\
3000*53ee8cc1Swenshuai.xi   tbinptr* H;\
3001*53ee8cc1Swenshuai.xi   bindex_t I;\
3002*53ee8cc1Swenshuai.xi   compute_tree_index(S, I);\
3003*53ee8cc1Swenshuai.xi   H = treebin_at(M, I);\
3004*53ee8cc1Swenshuai.xi   X->index = I;\
3005*53ee8cc1Swenshuai.xi   X->child[0] = X->child[1] = 0;\
3006*53ee8cc1Swenshuai.xi   if (!treemap_is_marked(M, I)) {\
3007*53ee8cc1Swenshuai.xi     mark_treemap(M, I);\
3008*53ee8cc1Swenshuai.xi     *H = X;\
3009*53ee8cc1Swenshuai.xi     X->parent = (tchunkptr)H;\
3010*53ee8cc1Swenshuai.xi     X->fd = X->bk = X;\
3011*53ee8cc1Swenshuai.xi   }\
3012*53ee8cc1Swenshuai.xi   else {\
3013*53ee8cc1Swenshuai.xi     tchunkptr T = *H;\
3014*53ee8cc1Swenshuai.xi     size_t K = S << leftshift_for_tree_index(I);\
3015*53ee8cc1Swenshuai.xi     for (;;) {\
3016*53ee8cc1Swenshuai.xi       if (chunksize(T) != S) {\
3017*53ee8cc1Swenshuai.xi         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3018*53ee8cc1Swenshuai.xi         K <<= 1;\
3019*53ee8cc1Swenshuai.xi         if (*C != 0)\
3020*53ee8cc1Swenshuai.xi           T = *C;\
3021*53ee8cc1Swenshuai.xi         else if (RTCHECK(ok_address(M, C))) {\
3022*53ee8cc1Swenshuai.xi           *C = X;\
3023*53ee8cc1Swenshuai.xi           X->parent = T;\
3024*53ee8cc1Swenshuai.xi           X->fd = X->bk = X;\
3025*53ee8cc1Swenshuai.xi           break;\
3026*53ee8cc1Swenshuai.xi         }\
3027*53ee8cc1Swenshuai.xi         else {\
3028*53ee8cc1Swenshuai.xi           CORRUPTION_ERROR_ACTION(M);\
3029*53ee8cc1Swenshuai.xi           break;\
3030*53ee8cc1Swenshuai.xi         }\
3031*53ee8cc1Swenshuai.xi       }\
3032*53ee8cc1Swenshuai.xi       else {\
3033*53ee8cc1Swenshuai.xi         tchunkptr F = T->fd;\
3034*53ee8cc1Swenshuai.xi         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3035*53ee8cc1Swenshuai.xi           T->fd = F->bk = X;\
3036*53ee8cc1Swenshuai.xi           X->fd = F;\
3037*53ee8cc1Swenshuai.xi           X->bk = T;\
3038*53ee8cc1Swenshuai.xi           X->parent = 0;\
3039*53ee8cc1Swenshuai.xi           break;\
3040*53ee8cc1Swenshuai.xi         }\
3041*53ee8cc1Swenshuai.xi         else {\
3042*53ee8cc1Swenshuai.xi           CORRUPTION_ERROR_ACTION(M);\
3043*53ee8cc1Swenshuai.xi           break;\
3044*53ee8cc1Swenshuai.xi         }\
3045*53ee8cc1Swenshuai.xi       }\
3046*53ee8cc1Swenshuai.xi     }\
3047*53ee8cc1Swenshuai.xi   }\
3048*53ee8cc1Swenshuai.xi }
3049*53ee8cc1Swenshuai.xi 
3050*53ee8cc1Swenshuai.xi /*
3051*53ee8cc1Swenshuai.xi   Unlink steps:
3052*53ee8cc1Swenshuai.xi 
3053*53ee8cc1Swenshuai.xi   1. If x is a chained node, unlink it from its same-sized fd/bk links
3054*53ee8cc1Swenshuai.xi      and choose its bk node as its replacement.
3055*53ee8cc1Swenshuai.xi   2. If x was the last node of its size, but not a leaf node, it must
3056*53ee8cc1Swenshuai.xi      be replaced with a leaf node (not merely one with an open left or
3057*53ee8cc1Swenshuai.xi      right), to make sure that lefts and rights of descendants
3058*53ee8cc1Swenshuai.xi      correspond properly to bit masks.  We use the rightmost descendant
3059*53ee8cc1Swenshuai.xi      of x.  We could use any other leaf, but this is easy to locate and
3060*53ee8cc1Swenshuai.xi      tends to counteract removal of leftmosts elsewhere, and so keeps
3061*53ee8cc1Swenshuai.xi      paths shorter than minimally guaranteed.  This doesn't loop much
3062*53ee8cc1Swenshuai.xi      because on average a node in a tree is near the bottom.
3063*53ee8cc1Swenshuai.xi   3. If x is the base of a chain (i.e., has parent links) relink
3064*53ee8cc1Swenshuai.xi      x's parent and children to x's replacement (or null if none).
3065*53ee8cc1Swenshuai.xi */
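
/*
  Worked sketch of step 2 above (names are illustrative only): when x is
  the last node of its size and has children, its replacement R is found
  by a right-biased walk down to a leaf, mirroring the loop in the macro:

      R = x's right child if nonzero, else its left child;
      while (R->child[1] != 0 || R->child[0] != 0)
        R = (R->child[1] != 0) ? R->child[1] : R->child[0];

  R is then detached from its old parent and rewired to take over x's
  parent and child links, exactly as the RTCHECK-guarded code below does.
*/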
3066*53ee8cc1Swenshuai.xi 
3067*53ee8cc1Swenshuai.xi #define unlink_large_chunk(M, X) {\
3068*53ee8cc1Swenshuai.xi   tchunkptr XP = X->parent;\
3069*53ee8cc1Swenshuai.xi   tchunkptr R;\
3070*53ee8cc1Swenshuai.xi   if (X->bk != X) {\
3071*53ee8cc1Swenshuai.xi     tchunkptr F = X->fd;\
3072*53ee8cc1Swenshuai.xi     R = X->bk;\
3073*53ee8cc1Swenshuai.xi     if (RTCHECK(ok_address(M, F))) {\
3074*53ee8cc1Swenshuai.xi       F->bk = R;\
3075*53ee8cc1Swenshuai.xi       R->fd = F;\
3076*53ee8cc1Swenshuai.xi     }\
3077*53ee8cc1Swenshuai.xi     else {\
3078*53ee8cc1Swenshuai.xi       CORRUPTION_ERROR_ACTION(M);\
3079*53ee8cc1Swenshuai.xi     }\
3080*53ee8cc1Swenshuai.xi   }\
3081*53ee8cc1Swenshuai.xi   else {\
3082*53ee8cc1Swenshuai.xi     tchunkptr* RP;\
3083*53ee8cc1Swenshuai.xi     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3084*53ee8cc1Swenshuai.xi         ((R = *(RP = &(X->child[0]))) != 0)) {\
3085*53ee8cc1Swenshuai.xi       tchunkptr* CP;\
3086*53ee8cc1Swenshuai.xi       while ((*(CP = &(R->child[1])) != 0) ||\
3087*53ee8cc1Swenshuai.xi              (*(CP = &(R->child[0])) != 0)) {\
3088*53ee8cc1Swenshuai.xi         R = *(RP = CP);\
3089*53ee8cc1Swenshuai.xi       }\
3090*53ee8cc1Swenshuai.xi       if (RTCHECK(ok_address(M, RP)))\
3091*53ee8cc1Swenshuai.xi         *RP = 0;\
3092*53ee8cc1Swenshuai.xi       else {\
3093*53ee8cc1Swenshuai.xi         CORRUPTION_ERROR_ACTION(M);\
3094*53ee8cc1Swenshuai.xi       }\
3095*53ee8cc1Swenshuai.xi     }\
3096*53ee8cc1Swenshuai.xi   }\
3097*53ee8cc1Swenshuai.xi   if (XP != 0) {\
3098*53ee8cc1Swenshuai.xi     tbinptr* H = treebin_at(M, X->index);\
3099*53ee8cc1Swenshuai.xi     if (X == *H) {\
3100*53ee8cc1Swenshuai.xi       if ((*H = R) == 0) \
3101*53ee8cc1Swenshuai.xi         clear_treemap(M, X->index);\
3102*53ee8cc1Swenshuai.xi     }\
3103*53ee8cc1Swenshuai.xi     else if (RTCHECK(ok_address(M, XP))) {\
3104*53ee8cc1Swenshuai.xi       if (XP->child[0] == X) \
3105*53ee8cc1Swenshuai.xi         XP->child[0] = R;\
3106*53ee8cc1Swenshuai.xi       else \
3107*53ee8cc1Swenshuai.xi         XP->child[1] = R;\
3108*53ee8cc1Swenshuai.xi     }\
3109*53ee8cc1Swenshuai.xi     else\
3110*53ee8cc1Swenshuai.xi       CORRUPTION_ERROR_ACTION(M);\
3111*53ee8cc1Swenshuai.xi     if (R != 0) {\
3112*53ee8cc1Swenshuai.xi       if (RTCHECK(ok_address(M, R))) {\
3113*53ee8cc1Swenshuai.xi         tchunkptr C0, C1;\
3114*53ee8cc1Swenshuai.xi         R->parent = XP;\
3115*53ee8cc1Swenshuai.xi         if ((C0 = X->child[0]) != 0) {\
3116*53ee8cc1Swenshuai.xi           if (RTCHECK(ok_address(M, C0))) {\
3117*53ee8cc1Swenshuai.xi             R->child[0] = C0;\
3118*53ee8cc1Swenshuai.xi             C0->parent = R;\
3119*53ee8cc1Swenshuai.xi           }\
3120*53ee8cc1Swenshuai.xi           else\
3121*53ee8cc1Swenshuai.xi             CORRUPTION_ERROR_ACTION(M);\
3122*53ee8cc1Swenshuai.xi         }\
3123*53ee8cc1Swenshuai.xi         if ((C1 = X->child[1]) != 0) {\
3124*53ee8cc1Swenshuai.xi           if (RTCHECK(ok_address(M, C1))) {\
3125*53ee8cc1Swenshuai.xi             R->child[1] = C1;\
3126*53ee8cc1Swenshuai.xi             C1->parent = R;\
3127*53ee8cc1Swenshuai.xi           }\
3128*53ee8cc1Swenshuai.xi           else\
3129*53ee8cc1Swenshuai.xi             CORRUPTION_ERROR_ACTION(M);\
3130*53ee8cc1Swenshuai.xi         }\
3131*53ee8cc1Swenshuai.xi       }\
3132*53ee8cc1Swenshuai.xi       else\
3133*53ee8cc1Swenshuai.xi         CORRUPTION_ERROR_ACTION(M);\
3134*53ee8cc1Swenshuai.xi     }\
3135*53ee8cc1Swenshuai.xi   }\
3136*53ee8cc1Swenshuai.xi }
3137*53ee8cc1Swenshuai.xi 
3138*53ee8cc1Swenshuai.xi /* Relays to large vs small bin operations */
3139*53ee8cc1Swenshuai.xi 
3140*53ee8cc1Swenshuai.xi #define insert_chunk(M, P, S)\
3141*53ee8cc1Swenshuai.xi   if (is_small(S)) insert_small_chunk(M, P, S)\
3142*53ee8cc1Swenshuai.xi   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3143*53ee8cc1Swenshuai.xi 
3144*53ee8cc1Swenshuai.xi #define unlink_chunk(M, P, S)\
3145*53ee8cc1Swenshuai.xi   if (is_small(S)) unlink_small_chunk(M, P, S)\
3146*53ee8cc1Swenshuai.xi   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3147*53ee8cc1Swenshuai.xi 
3148*53ee8cc1Swenshuai.xi 
3149*53ee8cc1Swenshuai.xi /* Relays to internal calls to malloc/free from realloc, memalign etc */
3150*53ee8cc1Swenshuai.xi 
3151*53ee8cc1Swenshuai.xi #if ONLY_MSPACES
3152*53ee8cc1Swenshuai.xi #define internal_malloc(m, b) mspace_malloc(m, b)
3153*53ee8cc1Swenshuai.xi #define internal_free(m, mem) mspace_free(m,mem);
3154*53ee8cc1Swenshuai.xi #else /* ONLY_MSPACES */
3155*53ee8cc1Swenshuai.xi #if MSPACES
3156*53ee8cc1Swenshuai.xi #define internal_malloc(m, b)\
3157*53ee8cc1Swenshuai.xi    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3158*53ee8cc1Swenshuai.xi #define internal_free(m, mem)\
3159*53ee8cc1Swenshuai.xi    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3160*53ee8cc1Swenshuai.xi #else /* MSPACES */
3161*53ee8cc1Swenshuai.xi #define internal_malloc(m, b) dlmalloc(b)
3162*53ee8cc1Swenshuai.xi #define internal_free(m, mem) dlfree(mem)
3163*53ee8cc1Swenshuai.xi #endif /* MSPACES */
3164*53ee8cc1Swenshuai.xi #endif /* ONLY_MSPACES */
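
/*
  Note: if MSPACES and ONLY_MSPACES keep their usual defaults of 0 (this
  file does not appear to enable either), the relays above reduce to plain
  dlmalloc/dlfree, so realloc, memalign etc. recurse into this allocator.
*/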
3165*53ee8cc1Swenshuai.xi 
3166*53ee8cc1Swenshuai.xi /* -----------------------  Direct-mmapping chunks ----------------------- */
3167*53ee8cc1Swenshuai.xi 
3168*53ee8cc1Swenshuai.xi /*
3169*53ee8cc1Swenshuai.xi   Directly mmapped chunks are set up with an offset to the start of
3170*53ee8cc1Swenshuai.xi   the mmapped region stored in the prev_foot field of the chunk. This
3171*53ee8cc1Swenshuai.xi   allows reconstruction of the required argument to MUNMAP when freed,
3172*53ee8cc1Swenshuai.xi   and also allows adjustment of the returned chunk to meet alignment
3173*53ee8cc1Swenshuai.xi   requirements (especially in memalign).  There is also enough space
3174*53ee8cc1Swenshuai.xi   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3175*53ee8cc1Swenshuai.xi   the PINUSE bit so frees can be checked.
3176*53ee8cc1Swenshuai.xi */
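
/*
  Layout sketch of a directly mmapped chunk as built by mmap_alloc below
  (informal; mm is the page-aligned base returned by DIRECT_MMAP):

    mm ... <offset bytes of alignment padding> ... p
    p->prev_foot = offset | IS_MMAPPED_BIT      (recovers mm for MUNMAP)
    p->head      = psize | CINUSE_BIT           (psize = mmsize - offset - MMAP_FOOT_PAD)
    ... psize bytes; user pointer = chunk2mem(p) ...
    chunk_plus_offset(p, psize)->head               = FENCEPOST_HEAD
    chunk_plus_offset(p, psize + SIZE_T_SIZE)->head = 0
*/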
3177*53ee8cc1Swenshuai.xi 
3178*53ee8cc1Swenshuai.xi /* Malloc using mmap */
3179*53ee8cc1Swenshuai.xi static void* mmap_alloc(mstate m, size_t nb) {
3180*53ee8cc1Swenshuai.xi   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3181*53ee8cc1Swenshuai.xi   if (mmsize > nb) {     /* Check for wrap around 0 */
3182*53ee8cc1Swenshuai.xi     char* mm = (char*)(DIRECT_MMAP(mmsize));
3183*53ee8cc1Swenshuai.xi     if (mm != CMFAIL) {
3184*53ee8cc1Swenshuai.xi       size_t offset = align_offset(chunk2mem(mm));
3185*53ee8cc1Swenshuai.xi       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3186*53ee8cc1Swenshuai.xi       mchunkptr p = (mchunkptr)((unsigned long)mm + offset);
3187*53ee8cc1Swenshuai.xi       p->prev_foot = offset | IS_MMAPPED_BIT;
3188*53ee8cc1Swenshuai.xi       (p)->head = (psize|CINUSE_BIT);
3189*53ee8cc1Swenshuai.xi       mark_inuse_foot(m, p, psize);
3190*53ee8cc1Swenshuai.xi       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3191*53ee8cc1Swenshuai.xi       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3192*53ee8cc1Swenshuai.xi 
3193*53ee8cc1Swenshuai.xi       if (mm < m->least_addr)
3194*53ee8cc1Swenshuai.xi         m->least_addr = mm;
3195*53ee8cc1Swenshuai.xi       if ((m->footprint += mmsize) > m->max_footprint)
3196*53ee8cc1Swenshuai.xi         m->max_footprint = m->footprint;
3197*53ee8cc1Swenshuai.xi       assert(is_aligned(chunk2mem(p)));
3198*53ee8cc1Swenshuai.xi       check_mmapped_chunk(m, p);
3199*53ee8cc1Swenshuai.xi       return chunk2mem(p);
3200*53ee8cc1Swenshuai.xi     }
3201*53ee8cc1Swenshuai.xi   }
3202*53ee8cc1Swenshuai.xi   return 0;
3203*53ee8cc1Swenshuai.xi }
3204*53ee8cc1Swenshuai.xi 
3205*53ee8cc1Swenshuai.xi /* Realloc using mmap */
3206*53ee8cc1Swenshuai.xi static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3207*53ee8cc1Swenshuai.xi   size_t oldsize = chunksize(oldp);
3208*53ee8cc1Swenshuai.xi   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3209*53ee8cc1Swenshuai.xi     return 0;
3210*53ee8cc1Swenshuai.xi   /* Keep old chunk if big enough but not too big */
3211*53ee8cc1Swenshuai.xi   if (oldsize >= nb + SIZE_T_SIZE &&
3212*53ee8cc1Swenshuai.xi       (oldsize - nb) <= (mparams.granularity << 1))
3213*53ee8cc1Swenshuai.xi     return oldp;
3214*53ee8cc1Swenshuai.xi   else {
3215*53ee8cc1Swenshuai.xi     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3216*53ee8cc1Swenshuai.xi     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3217*53ee8cc1Swenshuai.xi     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3218*53ee8cc1Swenshuai.xi                                          CHUNK_ALIGN_MASK);
3219*53ee8cc1Swenshuai.xi     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3220*53ee8cc1Swenshuai.xi                                   oldmmsize, newmmsize, 1);
3221*53ee8cc1Swenshuai.xi     if (cp != CMFAIL) {
3222*53ee8cc1Swenshuai.xi       mchunkptr newp = (mchunkptr)((unsigned long)cp + offset);
3223*53ee8cc1Swenshuai.xi       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3224*53ee8cc1Swenshuai.xi       newp->head = (psize|CINUSE_BIT);
3225*53ee8cc1Swenshuai.xi       mark_inuse_foot(m, newp, psize);
3226*53ee8cc1Swenshuai.xi       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3227*53ee8cc1Swenshuai.xi       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3228*53ee8cc1Swenshuai.xi 
3229*53ee8cc1Swenshuai.xi       if (cp < m->least_addr)
3230*53ee8cc1Swenshuai.xi         m->least_addr = cp;
3231*53ee8cc1Swenshuai.xi       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3232*53ee8cc1Swenshuai.xi         m->max_footprint = m->footprint;
3233*53ee8cc1Swenshuai.xi       check_mmapped_chunk(m, newp);
3234*53ee8cc1Swenshuai.xi       return newp;
3235*53ee8cc1Swenshuai.xi     }
3236*53ee8cc1Swenshuai.xi   }
3237*53ee8cc1Swenshuai.xi   return 0;
3238*53ee8cc1Swenshuai.xi }
3239*53ee8cc1Swenshuai.xi 
3240*53ee8cc1Swenshuai.xi /* -------------------------- mspace management -------------------------- */
3241*53ee8cc1Swenshuai.xi 
3242*53ee8cc1Swenshuai.xi /* Initialize top chunk and its size */
3243*53ee8cc1Swenshuai.xi static void init_top(mstate m, mchunkptr p, size_t psize) {
3244*53ee8cc1Swenshuai.xi   /* Ensure alignment */
3245*53ee8cc1Swenshuai.xi   size_t offset = align_offset(chunk2mem(p));
3246*53ee8cc1Swenshuai.xi   p = (mchunkptr)((unsigned long)p + offset);
3247*53ee8cc1Swenshuai.xi   psize -= offset;
3248*53ee8cc1Swenshuai.xi 
3249*53ee8cc1Swenshuai.xi   m->top = p;
3250*53ee8cc1Swenshuai.xi   m->topsize = psize;
3251*53ee8cc1Swenshuai.xi   p->head = psize | PINUSE_BIT;
3252*53ee8cc1Swenshuai.xi   /* set size of fake trailing chunk holding overhead space only once */
3253*53ee8cc1Swenshuai.xi   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3254*53ee8cc1Swenshuai.xi   m->trim_check = mparams.trim_threshold; /* reset on each update */
3255*53ee8cc1Swenshuai.xi }
3256*53ee8cc1Swenshuai.xi 
3257*53ee8cc1Swenshuai.xi /* Initialize bins for a new mstate that is otherwise zeroed out */
3258*53ee8cc1Swenshuai.xi static void init_bins(mstate m) {
3259*53ee8cc1Swenshuai.xi   /* Establish circular links for smallbins */
3260*53ee8cc1Swenshuai.xi   bindex_t i;
3261*53ee8cc1Swenshuai.xi   for (i = 0; i < NSMALLBINS; ++i) {
3262*53ee8cc1Swenshuai.xi     sbinptr bin = smallbin_at(m,i);
3263*53ee8cc1Swenshuai.xi     bin->fd = bin->bk = bin;
3264*53ee8cc1Swenshuai.xi   }
3265*53ee8cc1Swenshuai.xi }
3266*53ee8cc1Swenshuai.xi 
3267*53ee8cc1Swenshuai.xi #if PROCEED_ON_ERROR
3268*53ee8cc1Swenshuai.xi 
3269*53ee8cc1Swenshuai.xi /* default corruption action */
3270*53ee8cc1Swenshuai.xi static void reset_on_error(mstate m) {
3271*53ee8cc1Swenshuai.xi   int i;
3272*53ee8cc1Swenshuai.xi   ++malloc_corruption_error_count;
3273*53ee8cc1Swenshuai.xi   /* Reinitialize fields to forget about all memory */
3274*53ee8cc1Swenshuai.xi   m->smallbins = m->treebins = 0;
3275*53ee8cc1Swenshuai.xi   m->dvsize = m->topsize = 0;
3276*53ee8cc1Swenshuai.xi   m->seg.base = 0;
3277*53ee8cc1Swenshuai.xi   m->seg.size = 0;
3278*53ee8cc1Swenshuai.xi   m->seg.next = 0;
3279*53ee8cc1Swenshuai.xi   m->top = m->dv = 0;
3280*53ee8cc1Swenshuai.xi   for (i = 0; i < NTREEBINS; ++i)
3281*53ee8cc1Swenshuai.xi     *treebin_at(m, i) = 0;
3282*53ee8cc1Swenshuai.xi   init_bins(m);
3283*53ee8cc1Swenshuai.xi }
3284*53ee8cc1Swenshuai.xi #endif /* PROCEED_ON_ERROR */
3285*53ee8cc1Swenshuai.xi 
3286*53ee8cc1Swenshuai.xi /* Allocate chunk and prepend remainder with chunk in successor base. */
3287*53ee8cc1Swenshuai.xi static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3288*53ee8cc1Swenshuai.xi                            size_t nb) {
3289*53ee8cc1Swenshuai.xi   mchunkptr p = align_as_chunk(newbase);
3290*53ee8cc1Swenshuai.xi   mchunkptr oldfirst = align_as_chunk(oldbase);
3291*53ee8cc1Swenshuai.xi   size_t psize = (char*)oldfirst - (char*)p;
3292*53ee8cc1Swenshuai.xi   mchunkptr q = chunk_plus_offset(p, nb);
3293*53ee8cc1Swenshuai.xi   size_t qsize = psize - nb;
3294*53ee8cc1Swenshuai.xi   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3295*53ee8cc1Swenshuai.xi 
3296*53ee8cc1Swenshuai.xi   assert((char*)oldfirst > (char*)q);
3297*53ee8cc1Swenshuai.xi   assert(pinuse(oldfirst));
3298*53ee8cc1Swenshuai.xi   assert(qsize >= MIN_CHUNK_SIZE);
3299*53ee8cc1Swenshuai.xi 
3300*53ee8cc1Swenshuai.xi   /* consolidate remainder with first chunk of old base */
3301*53ee8cc1Swenshuai.xi   if (oldfirst == m->top) {
3302*53ee8cc1Swenshuai.xi     size_t tsize = m->topsize += qsize;
3303*53ee8cc1Swenshuai.xi     m->top = q;
3304*53ee8cc1Swenshuai.xi     q->head = tsize | PINUSE_BIT;
3305*53ee8cc1Swenshuai.xi     check_top_chunk(m, q);
3306*53ee8cc1Swenshuai.xi   }
3307*53ee8cc1Swenshuai.xi   else if (oldfirst == m->dv) {
3308*53ee8cc1Swenshuai.xi     size_t dsize = m->dvsize += qsize;
3309*53ee8cc1Swenshuai.xi     m->dv = q;
3310*53ee8cc1Swenshuai.xi     set_size_and_pinuse_of_free_chunk(q, dsize);
3311*53ee8cc1Swenshuai.xi   }
3312*53ee8cc1Swenshuai.xi   else {
3313*53ee8cc1Swenshuai.xi     if (!cinuse(oldfirst)) {
3314*53ee8cc1Swenshuai.xi       size_t nsize = chunksize(oldfirst);
3315*53ee8cc1Swenshuai.xi       unlink_chunk(m, oldfirst, nsize);
3316*53ee8cc1Swenshuai.xi       oldfirst = chunk_plus_offset(oldfirst, nsize);
3317*53ee8cc1Swenshuai.xi       qsize += nsize;
3318*53ee8cc1Swenshuai.xi     }
3319*53ee8cc1Swenshuai.xi     set_free_with_pinuse(q, qsize, oldfirst);
3320*53ee8cc1Swenshuai.xi     insert_chunk(m, q, qsize);
3321*53ee8cc1Swenshuai.xi     check_free_chunk(m, q);
3322*53ee8cc1Swenshuai.xi   }
3323*53ee8cc1Swenshuai.xi 
3324*53ee8cc1Swenshuai.xi   check_malloced_chunk(m, chunk2mem(p), nb);
3325*53ee8cc1Swenshuai.xi   return chunk2mem(p);
3326*53ee8cc1Swenshuai.xi }
3327*53ee8cc1Swenshuai.xi 
3328*53ee8cc1Swenshuai.xi 
3329*53ee8cc1Swenshuai.xi /* Add a segment to hold a new noncontiguous region */
3330*53ee8cc1Swenshuai.xi static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3331*53ee8cc1Swenshuai.xi   /* Determine locations and sizes of segment, fenceposts, old top */
3332*53ee8cc1Swenshuai.xi   char* old_top = (char*)m->top;
3333*53ee8cc1Swenshuai.xi   msegmentptr oldsp = segment_holding(m, old_top);
3334*53ee8cc1Swenshuai.xi   char* old_end = oldsp->base + oldsp->size;
3335*53ee8cc1Swenshuai.xi   size_t ssize = pad_request(sizeof(struct malloc_segment));
3336*53ee8cc1Swenshuai.xi   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3337*53ee8cc1Swenshuai.xi   size_t offset = align_offset(chunk2mem(rawsp));
3338*53ee8cc1Swenshuai.xi   char* asp = rawsp + offset;
3339*53ee8cc1Swenshuai.xi   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3340*53ee8cc1Swenshuai.xi   mchunkptr sp = (mchunkptr)((unsigned long)csp);
3341*53ee8cc1Swenshuai.xi   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3342*53ee8cc1Swenshuai.xi   mchunkptr tnext = chunk_plus_offset(sp, ssize);
3343*53ee8cc1Swenshuai.xi   mchunkptr p = tnext;
3344*53ee8cc1Swenshuai.xi   int nfences = 0;
3345*53ee8cc1Swenshuai.xi 
3346*53ee8cc1Swenshuai.xi   /* reset top to new space */
3347*53ee8cc1Swenshuai.xi   init_top(m, (mchunkptr)((unsigned long)tbase), tsize - TOP_FOOT_SIZE);
3348*53ee8cc1Swenshuai.xi 
3349*53ee8cc1Swenshuai.xi   /* Set up segment record */
3350*53ee8cc1Swenshuai.xi   assert(is_aligned(ss));
3351*53ee8cc1Swenshuai.xi   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3352*53ee8cc1Swenshuai.xi   *ss = m->seg; /* Push current record */
3353*53ee8cc1Swenshuai.xi   m->seg.base = tbase;
3354*53ee8cc1Swenshuai.xi   m->seg.size = tsize;
3355*53ee8cc1Swenshuai.xi   m->seg.sflags = mmapped;
3356*53ee8cc1Swenshuai.xi   m->seg.next = ss;
3357*53ee8cc1Swenshuai.xi 
3358*53ee8cc1Swenshuai.xi   /* Insert trailing fenceposts */
3359*53ee8cc1Swenshuai.xi   for (;;) {
3360*53ee8cc1Swenshuai.xi     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3361*53ee8cc1Swenshuai.xi     p->head = FENCEPOST_HEAD;
3362*53ee8cc1Swenshuai.xi     ++nfences;
3363*53ee8cc1Swenshuai.xi     if ((char*)(&(nextp->head)) < old_end)
3364*53ee8cc1Swenshuai.xi       p = nextp;
3365*53ee8cc1Swenshuai.xi     else
3366*53ee8cc1Swenshuai.xi       break;
3367*53ee8cc1Swenshuai.xi   }
3368*53ee8cc1Swenshuai.xi   assert(nfences >= 2);
3369*53ee8cc1Swenshuai.xi 
3370*53ee8cc1Swenshuai.xi   /* Insert the rest of old top into a bin as an ordinary free chunk */
3371*53ee8cc1Swenshuai.xi   if (csp != old_top) {
3372*53ee8cc1Swenshuai.xi     mchunkptr q = (mchunkptr)((unsigned long)old_top);
3373*53ee8cc1Swenshuai.xi     size_t psize = csp - old_top;
3374*53ee8cc1Swenshuai.xi     mchunkptr tn = chunk_plus_offset(q, psize);
3375*53ee8cc1Swenshuai.xi     set_free_with_pinuse(q, psize, tn);
3376*53ee8cc1Swenshuai.xi     insert_chunk(m, q, psize);
3377*53ee8cc1Swenshuai.xi   }
3378*53ee8cc1Swenshuai.xi 
3379*53ee8cc1Swenshuai.xi   check_top_chunk(m, m->top);
3380*53ee8cc1Swenshuai.xi }
3381*53ee8cc1Swenshuai.xi 
3382*53ee8cc1Swenshuai.xi /* -------------------------- System allocation -------------------------- */
3383*53ee8cc1Swenshuai.xi 
3384*53ee8cc1Swenshuai.xi /* Get memory from system using MORECORE or MMAP */
3385*53ee8cc1Swenshuai.xi static void* sys_alloc(mstate m, size_t nb) {
3386*53ee8cc1Swenshuai.xi   char* tbase = CMFAIL;
3387*53ee8cc1Swenshuai.xi   size_t tsize = 0;
3388*53ee8cc1Swenshuai.xi   flag_t mmap_flag = 0;
3389*53ee8cc1Swenshuai.xi 
3390*53ee8cc1Swenshuai.xi   init_mparams();
3391*53ee8cc1Swenshuai.xi 
3392*53ee8cc1Swenshuai.xi   /* Directly map large chunks */
3393*53ee8cc1Swenshuai.xi   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3394*53ee8cc1Swenshuai.xi     void* mem = mmap_alloc(m, nb);
3395*53ee8cc1Swenshuai.xi     if (mem != 0)
3396*53ee8cc1Swenshuai.xi       return mem;
3397*53ee8cc1Swenshuai.xi   }
3398*53ee8cc1Swenshuai.xi 
3399*53ee8cc1Swenshuai.xi   /*
3400*53ee8cc1Swenshuai.xi     Try getting memory in any of three ways (in most-preferred to
3401*53ee8cc1Swenshuai.xi     least-preferred order):
3402*53ee8cc1Swenshuai.xi     1. A call to MORECORE that can normally contiguously extend memory.
3403*53ee8cc1Swenshuai.xi        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3404*53ee8cc1Swenshuai.xi        main space is mmapped or a previous contiguous call failed)
3405*53ee8cc1Swenshuai.xi     2. A call to MMAP new space (disabled if not HAVE_MMAP).
3406*53ee8cc1Swenshuai.xi        Note that under the default settings, if MORECORE is unable to
3407*53ee8cc1Swenshuai.xi        fulfill a request, and HAVE_MMAP is true, then mmap is
3408*53ee8cc1Swenshuai.xi        used as a noncontiguous system allocator. This is a useful backup
3409*53ee8cc1Swenshuai.xi        strategy for systems with holes in address spaces -- in this case
3410*53ee8cc1Swenshuai.xi        sbrk cannot contiguously expand the heap, but mmap may be able to
3411*53ee8cc1Swenshuai.xi        find space.
3412*53ee8cc1Swenshuai.xi     3. A call to MORECORE that cannot usually contiguously extend memory.
3413*53ee8cc1Swenshuai.xi        (disabled if not HAVE_MORECORE)
3414*53ee8cc1Swenshuai.xi   */
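
  /*
    Condensed decision order of the code below (informal pseudocode, not
    compiled):

      if (MORECORE_CONTIGUOUS && !use_noncontiguous(m))  try step 1;
      if (no memory yet && HAVE_MMAP)                    try step 2;
      if (no memory yet && HAVE_MORECORE)                try step 3;
      if (tbase != CMFAIL)  install segment and carve nb from top;
      else                  MALLOC_FAILURE_ACTION;
  */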
3415*53ee8cc1Swenshuai.xi 
3416*53ee8cc1Swenshuai.xi   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3417*53ee8cc1Swenshuai.xi     char* br = CMFAIL;
3418*53ee8cc1Swenshuai.xi     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3419*53ee8cc1Swenshuai.xi     size_t asize = 0;
3420*53ee8cc1Swenshuai.xi     {
3421*53ee8cc1Swenshuai.xi         int ret_value = ACQUIRE_MORECORE_LOCK();
3422*53ee8cc1Swenshuai.xi         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
3423*53ee8cc1Swenshuai.xi     }
3424*53ee8cc1Swenshuai.xi 
3425*53ee8cc1Swenshuai.xi     if (ss == 0) {  /* First time through or recovery */
3426*53ee8cc1Swenshuai.xi       char* base = (char*)CALL_MORECORE(0);
3427*53ee8cc1Swenshuai.xi       if (base != CMFAIL) {
3428*53ee8cc1Swenshuai.xi         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3429*53ee8cc1Swenshuai.xi         /* Adjust to end on a page boundary */
3430*53ee8cc1Swenshuai.xi         if (!is_page_aligned(base))
3431*53ee8cc1Swenshuai.xi           asize += (page_align((size_t)base) - (size_t)base);
3432*53ee8cc1Swenshuai.xi         /* Can't call MORECORE if size is negative when treated as signed */
3433*53ee8cc1Swenshuai.xi         if (asize < HALF_MAX_SIZE_T &&
3434*53ee8cc1Swenshuai.xi             (br = (char*)(CALL_MORECORE(asize))) == base) {
3435*53ee8cc1Swenshuai.xi           tbase = base;
3436*53ee8cc1Swenshuai.xi           tsize = asize;
3437*53ee8cc1Swenshuai.xi         }
3438*53ee8cc1Swenshuai.xi       }
3439*53ee8cc1Swenshuai.xi     }
3440*53ee8cc1Swenshuai.xi     else {
3441*53ee8cc1Swenshuai.xi       /* Subtract out existing available top space from MORECORE request. */
3442*53ee8cc1Swenshuai.xi       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3443*53ee8cc1Swenshuai.xi       /* Use mem here only if it did contiguously extend old space */
3444*53ee8cc1Swenshuai.xi       if (asize < HALF_MAX_SIZE_T &&
3445*53ee8cc1Swenshuai.xi           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3446*53ee8cc1Swenshuai.xi         tbase = br;
3447*53ee8cc1Swenshuai.xi         tsize = asize;
3448*53ee8cc1Swenshuai.xi       }
3449*53ee8cc1Swenshuai.xi     }
3450*53ee8cc1Swenshuai.xi 
3451*53ee8cc1Swenshuai.xi     if (tbase == CMFAIL) {    /* Cope with partial failure */
3452*53ee8cc1Swenshuai.xi       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3453*53ee8cc1Swenshuai.xi         if (asize < HALF_MAX_SIZE_T &&
3454*53ee8cc1Swenshuai.xi             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3455*53ee8cc1Swenshuai.xi           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3456*53ee8cc1Swenshuai.xi           if (esize < HALF_MAX_SIZE_T) {
3457*53ee8cc1Swenshuai.xi             char* end = (char*)CALL_MORECORE(esize);
3458*53ee8cc1Swenshuai.xi             if (end != CMFAIL)
3459*53ee8cc1Swenshuai.xi               asize += esize;
3460*53ee8cc1Swenshuai.xi             else {            /* Can't use; try to release */
3461*53ee8cc1Swenshuai.xi               CALL_MORECORE(-asize);
3462*53ee8cc1Swenshuai.xi               br = CMFAIL;
3463*53ee8cc1Swenshuai.xi             }
3464*53ee8cc1Swenshuai.xi           }
3465*53ee8cc1Swenshuai.xi         }
3466*53ee8cc1Swenshuai.xi       }
3467*53ee8cc1Swenshuai.xi       if (br != CMFAIL) {    /* Use the space we did get */
3468*53ee8cc1Swenshuai.xi         tbase = br;
3469*53ee8cc1Swenshuai.xi         tsize = asize;
3470*53ee8cc1Swenshuai.xi       }
3471*53ee8cc1Swenshuai.xi       else
3472*53ee8cc1Swenshuai.xi         disable_contiguous(m); /* Don't try contiguous path in the future */
3473*53ee8cc1Swenshuai.xi     }
3474*53ee8cc1Swenshuai.xi 
3475*53ee8cc1Swenshuai.xi     RELEASE_MORECORE_LOCK();
3476*53ee8cc1Swenshuai.xi   }
3477*53ee8cc1Swenshuai.xi 
3478*53ee8cc1Swenshuai.xi   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
3479*53ee8cc1Swenshuai.xi     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3480*53ee8cc1Swenshuai.xi     size_t rsize = granularity_align(req);
3481*53ee8cc1Swenshuai.xi     if (rsize > nb) { /* Fail if wraps around zero */
3482*53ee8cc1Swenshuai.xi       char* mp = (char*)(CALL_MMAP(rsize));
3483*53ee8cc1Swenshuai.xi       if (mp != CMFAIL) {
3484*53ee8cc1Swenshuai.xi         tbase = mp;
3485*53ee8cc1Swenshuai.xi         tsize = rsize;
3486*53ee8cc1Swenshuai.xi         mmap_flag = IS_MMAPPED_BIT;
3487*53ee8cc1Swenshuai.xi       }
3488*53ee8cc1Swenshuai.xi     }
3489*53ee8cc1Swenshuai.xi   }
3490*53ee8cc1Swenshuai.xi 
3491*53ee8cc1Swenshuai.xi   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3492*53ee8cc1Swenshuai.xi     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3493*53ee8cc1Swenshuai.xi     if (asize < HALF_MAX_SIZE_T) {
3494*53ee8cc1Swenshuai.xi       char* br = CMFAIL;
3495*53ee8cc1Swenshuai.xi       char* end = CMFAIL;
3496*53ee8cc1Swenshuai.xi       {
3497*53ee8cc1Swenshuai.xi         int ret_value = ACQUIRE_MORECORE_LOCK();
3498*53ee8cc1Swenshuai.xi         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
3499*53ee8cc1Swenshuai.xi       }
3500*53ee8cc1Swenshuai.xi       br = (char*)(CALL_MORECORE(asize));
3501*53ee8cc1Swenshuai.xi       end = (char*)(CALL_MORECORE(0));
3502*53ee8cc1Swenshuai.xi       RELEASE_MORECORE_LOCK();
3503*53ee8cc1Swenshuai.xi       if (br != CMFAIL && end != CMFAIL && br < end) {
3504*53ee8cc1Swenshuai.xi         size_t ssize = end - br;
3505*53ee8cc1Swenshuai.xi         if (ssize > nb + TOP_FOOT_SIZE) {
3506*53ee8cc1Swenshuai.xi           tbase = br;
3507*53ee8cc1Swenshuai.xi           tsize = ssize;
3508*53ee8cc1Swenshuai.xi         }
3509*53ee8cc1Swenshuai.xi       }
3510*53ee8cc1Swenshuai.xi     }
3511*53ee8cc1Swenshuai.xi   }
3512*53ee8cc1Swenshuai.xi 
3513*53ee8cc1Swenshuai.xi   if (tbase != CMFAIL) {
3514*53ee8cc1Swenshuai.xi 
3515*53ee8cc1Swenshuai.xi     if ((m->footprint += tsize) > m->max_footprint)
3516*53ee8cc1Swenshuai.xi       m->max_footprint = m->footprint;
3517*53ee8cc1Swenshuai.xi 
3518*53ee8cc1Swenshuai.xi     if (!is_initialized(m)) { /* first-time initialization */
3519*53ee8cc1Swenshuai.xi       m->seg.base = m->least_addr = tbase;
3520*53ee8cc1Swenshuai.xi       m->seg.size = tsize;
3521*53ee8cc1Swenshuai.xi       m->seg.sflags = mmap_flag;
3522*53ee8cc1Swenshuai.xi       m->magic = mparams.magic;
3523*53ee8cc1Swenshuai.xi       init_bins(m);
3524*53ee8cc1Swenshuai.xi       if (is_global(m))
3525*53ee8cc1Swenshuai.xi         init_top(m, (mchunkptr)((unsigned long)tbase), tsize - TOP_FOOT_SIZE);
3526*53ee8cc1Swenshuai.xi       else {
3527*53ee8cc1Swenshuai.xi         /* Offset top by embedded malloc_state */
3528*53ee8cc1Swenshuai.xi         mchunkptr mn = next_chunk(mem2chunk(m));
3529*53ee8cc1Swenshuai.xi         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3530*53ee8cc1Swenshuai.xi       }
3531*53ee8cc1Swenshuai.xi     }
3532*53ee8cc1Swenshuai.xi 
3533*53ee8cc1Swenshuai.xi     else {
3534*53ee8cc1Swenshuai.xi       /* Try to merge with an existing segment */
3535*53ee8cc1Swenshuai.xi       msegmentptr sp = &m->seg;
3536*53ee8cc1Swenshuai.xi       while (sp != 0 && tbase != sp->base + sp->size)
3537*53ee8cc1Swenshuai.xi         sp = sp->next;
3538*53ee8cc1Swenshuai.xi       if (sp != 0 &&
3539*53ee8cc1Swenshuai.xi           !is_extern_segment(sp) &&
3540*53ee8cc1Swenshuai.xi           (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3541*53ee8cc1Swenshuai.xi           segment_holds(sp, m->top)) { /* append */
3542*53ee8cc1Swenshuai.xi         sp->size += tsize;
3543*53ee8cc1Swenshuai.xi         init_top(m, m->top, m->topsize + tsize);
3544*53ee8cc1Swenshuai.xi       }
3545*53ee8cc1Swenshuai.xi       else {
3546*53ee8cc1Swenshuai.xi         if (tbase < m->least_addr)
3547*53ee8cc1Swenshuai.xi           m->least_addr = tbase;
3548*53ee8cc1Swenshuai.xi         sp = &m->seg;
3549*53ee8cc1Swenshuai.xi         while (sp != 0 && sp->base != tbase + tsize)
3550*53ee8cc1Swenshuai.xi           sp = sp->next;
3551*53ee8cc1Swenshuai.xi         if (sp != 0 &&
3552*53ee8cc1Swenshuai.xi             !is_extern_segment(sp) &&
3553*53ee8cc1Swenshuai.xi             (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3554*53ee8cc1Swenshuai.xi           char* oldbase = sp->base;
3555*53ee8cc1Swenshuai.xi           sp->base = tbase;
3556*53ee8cc1Swenshuai.xi           sp->size += tsize;
3557*53ee8cc1Swenshuai.xi           return prepend_alloc(m, tbase, oldbase, nb);
3558*53ee8cc1Swenshuai.xi         }
3559*53ee8cc1Swenshuai.xi         else
3560*53ee8cc1Swenshuai.xi           add_segment(m, tbase, tsize, mmap_flag);
3561*53ee8cc1Swenshuai.xi       }
3562*53ee8cc1Swenshuai.xi     }
3563*53ee8cc1Swenshuai.xi 
3564*53ee8cc1Swenshuai.xi     if (nb < m->topsize) { /* Allocate from new or extended top space */
3565*53ee8cc1Swenshuai.xi       size_t rsize = m->topsize -= nb;
3566*53ee8cc1Swenshuai.xi       mchunkptr p = m->top;
3567*53ee8cc1Swenshuai.xi       mchunkptr r = m->top = chunk_plus_offset(p, nb);
3568*53ee8cc1Swenshuai.xi       r->head = rsize | PINUSE_BIT;
3569*53ee8cc1Swenshuai.xi       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3570*53ee8cc1Swenshuai.xi       check_top_chunk(m, m->top);
3571*53ee8cc1Swenshuai.xi       check_malloced_chunk(m, chunk2mem(p), nb);
3572*53ee8cc1Swenshuai.xi       return chunk2mem(p);
3573*53ee8cc1Swenshuai.xi     }
3574*53ee8cc1Swenshuai.xi   }
3575*53ee8cc1Swenshuai.xi 
3576*53ee8cc1Swenshuai.xi   MALLOC_FAILURE_ACTION;
3577*53ee8cc1Swenshuai.xi   return 0;
3578*53ee8cc1Swenshuai.xi }
3579*53ee8cc1Swenshuai.xi 
3580*53ee8cc1Swenshuai.xi /* -----------------------  system deallocation -------------------------- */
3581*53ee8cc1Swenshuai.xi 
3582*53ee8cc1Swenshuai.xi /* Unmap and unlink any mmapped segments that don't contain used chunks */
3583*53ee8cc1Swenshuai.xi static size_t release_unused_segments(mstate m) {
3584*53ee8cc1Swenshuai.xi   size_t released = 0;
3585*53ee8cc1Swenshuai.xi   msegmentptr pred = &m->seg;
3586*53ee8cc1Swenshuai.xi   msegmentptr sp = pred->next;
3587*53ee8cc1Swenshuai.xi   while (sp != 0) {
3588*53ee8cc1Swenshuai.xi     char* base = sp->base;
3589*53ee8cc1Swenshuai.xi     size_t size = sp->size;
3590*53ee8cc1Swenshuai.xi     msegmentptr next = sp->next;
3591*53ee8cc1Swenshuai.xi     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3592*53ee8cc1Swenshuai.xi       mchunkptr p = align_as_chunk(base);
3593*53ee8cc1Swenshuai.xi       size_t psize = chunksize(p);
3594*53ee8cc1Swenshuai.xi       /* Can unmap if first chunk holds entire segment and not pinned */
3595*53ee8cc1Swenshuai.xi       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3596*53ee8cc1Swenshuai.xi         tchunkptr tp = (tchunkptr)p;
3597*53ee8cc1Swenshuai.xi         assert(segment_holds(sp, (char*)sp));
3598*53ee8cc1Swenshuai.xi         if (p == m->dv) {
3599*53ee8cc1Swenshuai.xi           m->dv = 0;
3600*53ee8cc1Swenshuai.xi           m->dvsize = 0;
3601*53ee8cc1Swenshuai.xi         }
3602*53ee8cc1Swenshuai.xi         else {
3603*53ee8cc1Swenshuai.xi           unlink_large_chunk(m, tp);
3604*53ee8cc1Swenshuai.xi         }
3605*53ee8cc1Swenshuai.xi         if (CALL_MUNMAP(base, size) == 0) {
3606*53ee8cc1Swenshuai.xi           released += size;
3607*53ee8cc1Swenshuai.xi           m->footprint -= size;
3608*53ee8cc1Swenshuai.xi           /* unlink obsoleted record */
3609*53ee8cc1Swenshuai.xi           sp = pred;
3610*53ee8cc1Swenshuai.xi           sp->next = next;
3611*53ee8cc1Swenshuai.xi         }
3612*53ee8cc1Swenshuai.xi         else { /* back out if cannot unmap */
3613*53ee8cc1Swenshuai.xi           insert_large_chunk(m, tp, psize);
3614*53ee8cc1Swenshuai.xi         }
3615*53ee8cc1Swenshuai.xi       }
3616*53ee8cc1Swenshuai.xi     }
3617*53ee8cc1Swenshuai.xi     pred = sp;
3618*53ee8cc1Swenshuai.xi     sp = next;
3619*53ee8cc1Swenshuai.xi   }
3620*53ee8cc1Swenshuai.xi   return released;
3621*53ee8cc1Swenshuai.xi }
3622*53ee8cc1Swenshuai.xi 
static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
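      /*
        Illustration with made-up numbers: with granularity 64K,
        topsize 200K and an effective pad of 16K, the rounded-up
        division yields 3 units and the subsequent -1 keeps one unit
        in reserve, so extra = 2 * 64K = 128K and top still retains
        72K, comfortably above pad.
      */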
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          {
            int ret_value = ACQUIRE_MORECORE_LOCK();
            if (ret_value)
              printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
          }
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MORECORE_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}

/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
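/*
  In outline, the search below treats each treebin as a bitwise trie
  keyed on chunk size: sizebits shifts nb so its current "digit"
  sits in the top bit, that bit selects the left or right child at
  each level, and rst remembers the deepest right subtree passed
  over, i.e. the smallest subtree holding only sizes larger than nb.
*/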
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);

  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }

  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /* If dv is a better fit, return 0 so malloc will use it */
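  /*
    The guard below relies on unsigned arithmetic: when dvsize < nb,
    (m->dvsize - nb) wraps to a huge value, so rsize always compares
    smaller and the tree chunk wins; dv can only be preferred when it
    is actually large enough for the request.
  */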
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}

/* allocate a small request from the best fitting chunk in a treebin */
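/*
  This is only reached when nb is a small request that no smallbin or
  dv chunk could satisfy, so the least-significant treemap bit names
  the smallest nonempty treebin and every chunk there is already big
  enough; the walk below merely minimizes the leftover rsize.
*/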
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);

  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

/* --------------------------- realloc support --------------------------- */

static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
  if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
    return 0;
  }
  if (!PREACTION(m)) {
    mchunkptr oldp = mem2chunk(oldmem);
    size_t oldsize = chunksize(oldp);
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
    mchunkptr newp = 0;
    void* extra = 0;

    /* Try to either shrink or extend into top. Else malloc-copy-free */

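    /*
      Sketch of the three in-place cases handled below: a shrink
      splits the old chunk and frees the tail via internal_free; an
      mmapped chunk is resized through mmap_resize; and a chunk
      adjacent to top grows by absorbing top space and pushing m->top
      forward. Anything else falls through to malloc-copy-free.
    */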
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
                ok_next(oldp, next) && ok_pinuse(next))) {
      size_t nb = request2size(bytes);
      if (is_mmapped(oldp))
        newp = mmap_resize(m, oldp, nb);
      else if (oldsize >= nb) { /* already big enough */
        size_t rsize = oldsize - nb;
        newp = oldp;
        if (rsize >= MIN_CHUNK_SIZE) {
          mchunkptr remainder = chunk_plus_offset(newp, nb);
          set_inuse(m, newp, nb);
          set_inuse(m, remainder, rsize);
          extra = chunk2mem(remainder);
        }
      }
      else if (next == m->top && oldsize + m->topsize > nb) {
        /* Expand into top */
        size_t newsize = oldsize + m->topsize;
        size_t newtopsize = newsize - nb;
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
        set_inuse(m, oldp, nb);
        newtop->head = newtopsize | PINUSE_BIT;
        m->top = newtop;
        m->topsize = newtopsize;
        newp = oldp;
      }
    }
    else {
      USAGE_ERROR_ACTION(m, oldmem);
      POSTACTION(m);
      return 0;
    }

    POSTACTION(m);

    if (newp != 0) {
      if (extra != 0) {
        internal_free(m, extra);
      }
      check_inuse_chunk(m, newp);
      return chunk2mem(newp);
    }
    else {
      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
        internal_free(m, oldmem);
      }
      return newmem;
    }
  }
  return 0;
}

/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) { /* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
      MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      if (PREACTION(m)) return 0;
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
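        /*
          Illustration with made-up numbers: for alignment 64, if mem
          lands at ...0x1010, rounding mem + 63 down to a multiple of
          64 yields 0x1040; if that leaves fewer than MIN_CHUNK_SIZE
          leading bytes, the next aligned spot 0x1080 is used instead,
          which the extra `alignment` bytes in req always make room
          for.
        */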
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
                                                       alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = (newsize|CINUSE_BIT);
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert(chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}

/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */
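  /*
    Concretely, the callers later in this file pass opts = 3 for
    dlindependent_calloc (same-size elements, zeroed) and opts = 0
    for dlindependent_comalloc (per-element sizes, not zeroed).
  */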

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}


/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */
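  /*
    One bit trick below is worth spelling out: smallbits is the
    smallmap shifted so bit 0 corresponds to the exact bin for nb.
    (smallbits & 0x3U) tests that bin and its immediate neighbor in
    one go, and idx += ~smallbits & 1 then advances to the neighbor
    only when the exact bin is empty; the neighbor's chunks are only
    marginally larger, so the fit is still treated as remainderless.
  */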

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */
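  /*
    A note on the backward path below: for a chunk whose predecessor
    is free, prev_foot holds the predecessor's size, except that for
    a directly mmapped chunk IS_MMAPPED_BIT is set and prev_foot
    instead records the offset back to the start of the mapping, so
    the whole region can be handed to munmap in one call.
  */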

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
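  /*
    Overflow example with a 32-bit size_t: n_elements = elem_size =
    0x10000 makes the product wrap to 0; since both operands have
    bits above 0xffff, the division check runs, 0 / 0x10000 differs
    from elem_size, and req becomes MAX_SIZE_T so dlmalloc fails
    cleanly. The cheap mask test skips the division entirely when
    both values are small enough that the multiply cannot overflow.
  */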
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

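/*
  dlpvalloc rounds the request itself up to a whole number of pages
  before page-aligning the result. For example, with a 4096-byte
  page, bytes = 5000 becomes (5000 + 4095) & ~4095 = 8192, so the
  caller gets two full pages.
*/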
void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  size_t largest_free;
  return internal_mallinfo(gm, &largest_free);
}
struct mallinfo dlmallinfo_ex(size_t *plargest_free) {
  return internal_mallinfo(gm, plargest_free);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}

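/*
  The capacity guards in the two creation routines below read oddly
  but are just overflow checks: (size_t)-(msize + TOP_FOOT_SIZE +
  mparams.page_size) is the largest capacity for which adding the
  bookkeeping overhead still fits in a size_t without wrapping.
*/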
create_mspace(size_t capacity,int locked)4463*53ee8cc1Swenshuai.xi mspace create_mspace(size_t capacity, int locked) {
4464*53ee8cc1Swenshuai.xi   mstate m = 0;
4465*53ee8cc1Swenshuai.xi   size_t msize = pad_request(sizeof(struct malloc_state));
4466*53ee8cc1Swenshuai.xi   init_mparams(); /* Ensure pagesize etc initialized */
4467*53ee8cc1Swenshuai.xi 
4468*53ee8cc1Swenshuai.xi   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4469*53ee8cc1Swenshuai.xi     size_t rs = ((capacity == 0)? mparams.granularity :
4470*53ee8cc1Swenshuai.xi                  (capacity + TOP_FOOT_SIZE + msize));
4471*53ee8cc1Swenshuai.xi     size_t tsize = granularity_align(rs);
4472*53ee8cc1Swenshuai.xi     char* tbase = (char*)(CALL_MMAP(tsize));
4473*53ee8cc1Swenshuai.xi     if (tbase != CMFAIL) {
4474*53ee8cc1Swenshuai.xi       m = init_user_mstate(tbase, tsize);
4475*53ee8cc1Swenshuai.xi       m->seg.sflags = IS_MMAPPED_BIT;
4476*53ee8cc1Swenshuai.xi       set_lock(m, locked);
4477*53ee8cc1Swenshuai.xi     }
4478*53ee8cc1Swenshuai.xi   }
4479*53ee8cc1Swenshuai.xi   return (mspace)m;
4480*53ee8cc1Swenshuai.xi }
4481*53ee8cc1Swenshuai.xi 
create_mspace_with_base(void * base,size_t capacity,int locked)4482*53ee8cc1Swenshuai.xi mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4483*53ee8cc1Swenshuai.xi   mstate m = 0;
4484*53ee8cc1Swenshuai.xi   size_t msize = pad_request(sizeof(struct malloc_state));
4485*53ee8cc1Swenshuai.xi   init_mparams(); /* Ensure pagesize etc initialized */
4486*53ee8cc1Swenshuai.xi 
4487*53ee8cc1Swenshuai.xi   if (capacity > msize + TOP_FOOT_SIZE &&
4488*53ee8cc1Swenshuai.xi       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4489*53ee8cc1Swenshuai.xi     m = init_user_mstate((char*)base, capacity);
4490*53ee8cc1Swenshuai.xi     m->seg.sflags = EXTERN_BIT;
4491*53ee8cc1Swenshuai.xi     set_lock(m, locked);
4492*53ee8cc1Swenshuai.xi   }
4493*53ee8cc1Swenshuai.xi   return (mspace)m;
4494*53ee8cc1Swenshuai.xi }
4495*53ee8cc1Swenshuai.xi 
destroy_mspace(mspace msp)4496*53ee8cc1Swenshuai.xi size_t destroy_mspace(mspace msp) {
4497*53ee8cc1Swenshuai.xi   size_t freed = 0;
4498*53ee8cc1Swenshuai.xi   mstate ms = (mstate)msp;
4499*53ee8cc1Swenshuai.xi   if (ok_magic(ms)) {
4500*53ee8cc1Swenshuai.xi     msegmentptr sp = &ms->seg;
4501*53ee8cc1Swenshuai.xi     while (sp != 0) {
4502*53ee8cc1Swenshuai.xi       char* base = sp->base;
4503*53ee8cc1Swenshuai.xi       size_t size = sp->size;
4504*53ee8cc1Swenshuai.xi       flag_t flag = sp->sflags;
4505*53ee8cc1Swenshuai.xi       sp = sp->next;
4506*53ee8cc1Swenshuai.xi       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4507*53ee8cc1Swenshuai.xi           CALL_MUNMAP(base, size) == 0)
4508*53ee8cc1Swenshuai.xi         freed += size;
4509*53ee8cc1Swenshuai.xi     }
4510*53ee8cc1Swenshuai.xi   }
4511*53ee8cc1Swenshuai.xi   else {
4512*53ee8cc1Swenshuai.xi     USAGE_ERROR_ACTION(ms,ms);
4513*53ee8cc1Swenshuai.xi   }
4514*53ee8cc1Swenshuai.xi   return freed;
4515*53ee8cc1Swenshuai.xi }
4516*53ee8cc1Swenshuai.xi 
4517*53ee8cc1Swenshuai.xi /*
4518*53ee8cc1Swenshuai.xi   mspace versions of routines are near-clones of the global
4519*53ee8cc1Swenshuai.xi   versions. This is not so nice but better than the alternatives.
4520*53ee8cc1Swenshuai.xi */
4521*53ee8cc1Swenshuai.xi 
4522*53ee8cc1Swenshuai.xi 
mspace_malloc(mspace msp,size_t bytes)4523*53ee8cc1Swenshuai.xi void* mspace_malloc(mspace msp, size_t bytes) {
4524*53ee8cc1Swenshuai.xi   mstate ms = (mstate)msp;
4525*53ee8cc1Swenshuai.xi   if (!ok_magic(ms)) {
4526*53ee8cc1Swenshuai.xi     USAGE_ERROR_ACTION(ms,ms);
4527*53ee8cc1Swenshuai.xi     return 0;
4528*53ee8cc1Swenshuai.xi   }
4529*53ee8cc1Swenshuai.xi   if (!PREACTION(ms)) {
4530*53ee8cc1Swenshuai.xi     void* mem;
4531*53ee8cc1Swenshuai.xi     size_t nb;
4532*53ee8cc1Swenshuai.xi     if (bytes <= MAX_SMALL_REQUEST) {
4533*53ee8cc1Swenshuai.xi       bindex_t idx;
4534*53ee8cc1Swenshuai.xi       binmap_t smallbits;
4535*53ee8cc1Swenshuai.xi       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4536*53ee8cc1Swenshuai.xi       idx = small_index(nb);
4537*53ee8cc1Swenshuai.xi       smallbits = ms->smallmap >> idx;
4538*53ee8cc1Swenshuai.xi 
4539*53ee8cc1Swenshuai.xi       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4540*53ee8cc1Swenshuai.xi         mchunkptr b, p;
4541*53ee8cc1Swenshuai.xi         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4542*53ee8cc1Swenshuai.xi         b = smallbin_at(ms, idx);
4543*53ee8cc1Swenshuai.xi         p = b->fd;
4544*53ee8cc1Swenshuai.xi         assert(chunksize(p) == small_index2size(idx));
4545*53ee8cc1Swenshuai.xi         unlink_first_small_chunk(ms, b, p, idx);
        set_inuse_and_pinuse(ms, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }

      else if (nb > ms->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(ms, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(ms, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(ms, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(ms, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }

        else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
          check_malloced_chunk(ms, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
        check_malloced_chunk(ms, mem, nb);
        goto postaction;
      }
    }

    if (nb <= ms->dvsize) {
      size_t rsize = ms->dvsize - nb;
      mchunkptr p = ms->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
        ms->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = ms->dvsize;
        ms->dvsize = 0;
        ms->dv = 0;
        set_inuse_and_pinuse(ms, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    else if (nb < ms->topsize) { /* Split top */
      size_t rsize = ms->topsize -= nb;
      mchunkptr p = ms->top;
      mchunkptr r = ms->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(ms, ms->top);
      check_malloced_chunk(ms, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(ms, nb);

  postaction:
    POSTACTION(ms);
    return mem;
  }

  return 0;
}

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    /* The product can only overflow if at least one operand exceeds
       16 bits, so the division check is skipped in the common case. */
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  /* opts == 3: all elements share one size and must be zeroed */
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  /* opts == 0: per-element sizes, no clearing */
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0; /* initialize so a bad magic check can't return garbage */
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else { /* report the usage error only on failure, not unconditionally */
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */
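
/*
  Example usage of the mspace API above (a minimal sketch, kept in a
  comment and not part of the build). create_mspace_with_base and
  destroy_mspace are declared earlier in this file; the buffer and the
  sizes below are illustrative assumptions only.

  static char arena[64 * 1024];

  void example(void)
  {
    // carve an independent malloc space out of a caller-owned buffer;
    // the third argument (locked) is 0 for single-threaded use
    mspace msp = create_mspace_with_base(arena, sizeof(arena), 0);
    void *p = mspace_malloc(msp, 128);
    void *q = mspace_calloc(msp, 16, sizeof(int));
    p = mspace_realloc(msp, p, 256);
    mspace_free(msp, p);
    mspace_free(msp, q);
    // tears down the space's bookkeeping; the caller still owns arena
    destroy_mspace(msp);
  }
*/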

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM. (A second, minimal static-arena
      sketch appears just after this comment block.)

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
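
/*
  A second, deliberately minimal MORECORE sketch, kept commented out
  like the example above. It hands out space from a fixed static
  arena, which is often all a small embedded port needs. The arena
  size and all names below are illustrative assumptions, not part of
  this file's configuration; to try it, define

      #define MORECORE staticMoreCore

  near the top of this file. It never shrinks, so it assumes
  MORECORE_CANNOT_TRIM is defined, as this file does.

  #define STATIC_ARENA_SIZE (256 * 1024)
  static char static_arena[STATIC_ARENA_SIZE];
  static size_t static_brk;   // number of bytes handed out so far

  void *staticMoreCore(int size)
  {
    if (size > 0)
    {
      // refuse requests that do not fit in the remaining arena
      if ((size_t) size > STATIC_ARENA_SIZE - static_brk)
        return (void *) MFAIL;
      {
        void *ptr = static_arena + static_brk;
        static_brk += (size_t) size;
        return ptr;             // contiguous with the previous call
      }
    }
    else if (size < 0)
    {
      // no shrink support; MORECORE_CANNOT_TRIM suppresses these calls
      return (void *) MFAIL;
    }
    else
    {
      // size == 0: return one past the end of previously returned memory
      return static_arena + static_brk;
    }
  }
*/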


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/