xref: /utopia/UTPA2-700.0.x/modules/msos/msos/ucos/dlmalloc.c (revision 53ee8cc121a030b8d368113ac3e966b4705770ef)
1 
2 #include "stdlib.h"
3 #include "dlmalloc.h"
4 
5 #define INSECURE            1
6 #define USE_DL_PREFIX
7 //#define MORECORE MoreCore
8 #define MORECORE_CANNOT_TRIM
9 
10 #define FOOTERS             0
11 #define MMAP_CLEARS         1
12 #define HAVE_MMAP           0
13 //#define HAVE_USR_INCLUDE_MALLOC_H 1
14 
15 #define MALLOC_ALIGNMENT    16
16 
17 /*
18   This is a version (aka dlmalloc) of malloc/free/realloc written by
19   Doug Lea and released to the public domain, as explained at
20   http://creativecommons.org/licenses/publicdomain.  Send questions,
21   comments, complaints, performance data, etc to dl@cs.oswego.edu
22 
23 * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
24 
25    Note: There may be an updated version of this malloc obtainable at
26            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
27          Check before installing!
28 
29 * Quickstart
30 
31   This library is all in one file to simplify the most common usage:
32   ftp it, compile it (-O3), and link it into another program. All of
33   the compile-time options default to reasonable values for use on
34   most platforms.  You might later want to step through various
35   compile-time and dynamic tuning options.
36 
37   For convenience, an include file for code using this malloc is at:
38      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
39   You don't really need this .h file unless you call functions not
40   defined in your system include files.  The .h file contains only the
41   excerpts from this file needed for using this malloc on ANSI C/C++
42   systems, so long as you haven't changed compile-time options about
43   naming and tuning parameters.  If you do, then you can create your
44   own malloc.h that does include all settings by cutting at the point
45   indicated below. Note that you may already by default be using a C
46   library containing a malloc that is based on some version of this
47   malloc (for example in linux). You might still want to use the one
48   in this file to customize settings or to avoid overheads associated
49   with library versions.
50 
51 * Vital statistics:
52 
53   Supported pointer/size_t representation:       4 or 8 bytes
54        size_t MUST be an unsigned type of the same width as
55        pointers. (If you are using an ancient system that declares
56        size_t as a signed type, or need it to be a different width
57        than pointers, you can use a previous release of this malloc
58        (e.g. 2.7.2) supporting these.)
59 
60   Alignment:                                     8 bytes (default)
61        This suffices for nearly all current machines and C compilers.
62        However, you can define MALLOC_ALIGNMENT to be wider than this
63        if necessary (up to 128bytes), at the expense of using more space.
64 
65   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
66                                           8 or 16 bytes (if 8byte sizes)
67        Each malloced chunk has a hidden word of overhead holding size
68        and status information, and additional cross-check word
69        if FOOTERS is defined.
70 
71   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
72                           8-byte ptrs:  32 bytes    (including overhead)
73 
74        Even a request for zero bytes (i.e., malloc(0)) returns a
75        pointer to something of the minimum allocatable size.
76        The maximum overhead wastage (i.e., number of extra bytes
77        allocated beyond what was requested in malloc) is less than or equal
78        to the minimum size, except for requests >= mmap_threshold that
79        are serviced via mmap(), where the worst case wastage is about
80        32 bytes plus the remainder from a system page (the minimal
81        mmap unit); typically 4096 or 8192 bytes.
82 
83   Security: static-safe; optionally more or less
84        The "security" of malloc refers to the ability of malicious
85        code to accentuate the effects of errors (for example, freeing
86        space that is not currently malloc'ed or overwriting past the
87        ends of chunks) in code that calls malloc.  This malloc
88        guarantees not to modify any memory locations below the base of
89        heap, i.e., static variables, even in the presence of usage
90        errors.  The routines additionally detect most improper frees
91        and reallocs.  All this holds as long as the static bookkeeping
92        for malloc itself is not corrupted by some other means.  This
93        is only one aspect of security -- these checks do not, and
94        cannot, detect all possible programming errors.
95 
96        If FOOTERS is defined nonzero, then each allocated chunk
97        carries an additional check word to verify that it was malloced
98        from its space.  These check words are the same within each
99        execution of a program using malloc, but differ across
100        executions, so externally crafted fake chunks cannot be
101        freed. This improves security by rejecting frees/reallocs that
102        could corrupt heap memory, in addition to the checks preventing
103        writes to statics that are always on.  This may further improve
104        security at the expense of time and space overhead.  (Note that
105        FOOTERS may also be worth using with MSPACES.)
106 
107        By default detected errors cause the program to abort (calling
108        "abort()"). You can override this to instead proceed past
109        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
110        has no effect, and a malloc that encounters a bad address
111        caused by user overwrites will ignore the bad address by
112        dropping pointers and indices to all known memory. This may
113        be appropriate for programs that should continue if at all
114        possible in the face of programming errors, although they may
115        run out of memory because dropped memory is never reclaimed.
116 
117        If you don't like either of these options, you can define
118        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
119        else. And if you are sure that your program using malloc has
120        no errors or vulnerabilities, you can define INSECURE to 1,
121        which might (or might not) provide a small performance improvement.
122 
123   Thread-safety: NOT thread-safe unless USE_LOCKS defined
124        When USE_LOCKS is defined, each public call to malloc, free,
125        etc is surrounded with either a pthread mutex or a win32
126        spinlock (depending on WIN32). This is not especially fast, and
127        can be a major bottleneck.  It is designed only to provide
128        minimal protection in concurrent environments, and to provide a
129        basis for extensions.  If you are using malloc in a concurrent
130        program, consider instead using ptmalloc, which is derived from
131        a version of this malloc. (See http://www.malloc.de).
132 
133   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
134        This malloc can use unix sbrk or any emulation (invoked using
135        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
136        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
137        memory.  On most unix systems, it tends to work best if both
138        MORECORE and MMAP are enabled.  On Win32, it uses emulations
139        based on VirtualAlloc. It also uses common C library functions
140        like memset.
141 
142   Compliance: I believe it is compliant with the Single Unix Specification
143        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
144        others as well.
145 
146 * Overview of algorithms
147 
148   This is not the fastest, most space-conserving, most portable, or
149   most tunable malloc ever written. However it is among the fastest
150   while also being among the most space-conserving, portable and
151   tunable.  Consistent balance across these factors results in a good
152   general-purpose allocator for malloc-intensive programs.
153 
154   In most ways, this malloc is a best-fit allocator. Generally, it
155   chooses the best-fitting existing chunk for a request, with ties
156   broken in approximately least-recently-used order. (This strategy
157   normally maintains low fragmentation.) However, for requests less
158   than 256bytes, it deviates from best-fit when there is not an
159   exactly fitting available chunk by preferring to use space adjacent
160   to that used for the previous small request, as well as by breaking
161   ties in approximately most-recently-used order. (These enhance
162   locality of series of small allocations.)  And for very large requests
163   (>= 256Kb by default), it relies on system memory mapping
164   facilities, if supported.  (This helps avoid carrying around and
165   possibly fragmenting memory used only for large chunks.)
166 
167   All operations (except malloc_stats and mallinfo) have execution
168   times that are bounded by a constant factor of the number of bits in
169   a size_t, not counting any clearing in calloc or copying in realloc,
170   or actions surrounding MORECORE and MMAP that have times
171   proportional to the number of non-contiguous regions returned by
172   system allocation routines, which is often just 1.
173 
174   The implementation is not very modular and seriously overuses
175   macros. Perhaps someday all C compilers will do as good a job
176   inlining modular code as can now be done by brute-force expansion,
177   but now, enough of them seem not to.
178 
179   Some compilers issue a lot of warnings about code that is
180   dead/unreachable only on some platforms, and also about intentional
181   uses of negation on unsigned types. All known cases of each can be
182   ignored.
183 
184   For a longer but out of date high-level description, see
185      http://gee.cs.oswego.edu/dl/html/malloc.html
186 
187 * MSPACES
188   If MSPACES is defined, then in addition to malloc, free, etc.,
189   this file also defines mspace_malloc, mspace_free, etc. These
190   are versions of malloc routines that take an "mspace" argument
191   obtained using create_mspace, to control all internal bookkeeping.
192   If ONLY_MSPACES is defined, only these versions are compiled.
193   So if you would like to use this allocator for only some allocations,
194   and your system malloc for others, you can compile with
195   ONLY_MSPACES and then do something like...
196     static mspace mymspace = create_mspace(0,0); // for example
197     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
198 
199   (Note: If you only need one instance of an mspace, you can instead
200   use "USE_DL_PREFIX" to relabel the global malloc.)
201 
202   You can similarly create thread-local allocators by storing
203   mspaces as thread-locals. For example:
204     static __thread mspace tlms = 0;
205     void*  tlmalloc(size_t bytes) {
206       if (tlms == 0) tlms = create_mspace(0, 0);
207       return mspace_malloc(tlms, bytes);
208     }
209     void  tlfree(void* mem) { mspace_free(tlms, mem); }
210 
211   Unless FOOTERS is defined, each mspace is completely independent.
212   You cannot allocate from one and free to another (although
213   conformance is only weakly checked, so usage errors are not always
214   caught). If FOOTERS is defined, then each chunk carries around a tag
215   indicating its originating mspace, and frees are directed to their
216   originating spaces.
217 
218  -------------------------  Compile-time options ---------------------------
219 
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
222 to size_t precision unless they are explicitly cast.
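
For example, an explicitly cast definition such as the following (the name
MY_GRANULARITY is only illustrative) keeps the arithmetic in size_t
precision; the defaults later in this file use the same style:

  #define MY_GRANULARITY ((size_t)64U * (size_t)1024U)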
223 
224 WIN32                    default: defined if _WIN32 defined
225   Defining WIN32 sets up defaults for MS environment and compilers.
226   Otherwise defaults are for unix.
227 
228 MALLOC_ALIGNMENT         default: (size_t)8
229   Controls the minimum alignment for malloc'ed chunks.  It must be a
230   power of two and at least 8, even on machines for which smaller
231   alignments would suffice. It may be defined as larger than this
232   though. Note however that code and data structures are optimized for
233   the case of 8-byte alignment.
234 
235 MSPACES                  default: 0 (false)
236   If true, compile in support for independent allocation spaces.
237   This is only supported if HAVE_MMAP is true.
238 
239 ONLY_MSPACES             default: 0 (false)
240   If true, only compile in mspace versions, not regular versions.
241 
242 USE_LOCKS                default: 0 (false)
243   Causes each call to each public routine to be surrounded with
244   pthread or WIN32 mutex lock/unlock. (If set true, this can be
245   overridden on a per-mspace basis for mspace versions.)
246 
247 FOOTERS                  default: 0
248   If true, provide extra checking and dispatching by placing
249   information in the footers of allocated chunks. This adds
250   space and time overhead.
251 
252 INSECURE                 default: 0
253   If true, omit checks for usage errors and heap space overwrites.
254 
255 USE_DL_PREFIX            default: NOT defined
256   Causes compiler to prefix all public routines with the string 'dl'.
257   This can be useful when you only want to use this malloc in one part
258   of a program, using your regular system malloc elsewhere.
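
  For example, code compiled against this file with USE_DL_PREFIX defined
  calls the prefixed routines directly, leaving the system allocator
  untouched (a minimal sketch):

    void* p = dlmalloc(100);
    // ... use p ...
    dlfree(p);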
259 
260 ABORT                    default: defined as abort()
261   Defines how to abort on failed checks.  On most systems, a failed
262   check cannot die with an "assert" or even print an informative
263   message, because the underlying print routines in turn call malloc,
264   which will fail again.  Generally, the best policy is to simply call
265   abort(). It's not very useful to do more than this because many
266   errors due to overwriting will show up as address faults (null, odd
267   addresses etc) rather than malloc-triggered checks, so will also
268   abort.  Also, most compilers know that abort() does not return, so
269   can better optimize code conditionally calling it.
270 
271 PROCEED_ON_ERROR           default: defined as 0 (false)
272   Controls whether detected bad addresses cause them to be bypassed
273   rather than aborting. If set, detected bad arguments to free and
274   realloc are ignored. And all bookkeeping information is zeroed out
275   upon a detected overwrite of freed heap space, thus losing the
276   ability to ever return it from malloc again, but enabling the
277   application to proceed. If PROCEED_ON_ERROR is defined, the
278   static variable malloc_corruption_error_count is compiled in
279   and can be examined to see if errors have occurred. This option
280   generates slower code than the default abort policy.
281 
282 DEBUG                    default: NOT defined
283   The DEBUG setting is mainly intended for people trying to modify
284   this code or diagnose problems when porting to new platforms.
285   However, it may also be able to better isolate user errors than just
286   using runtime checks.  The assertions in the check routines spell
287   out in more detail the assumptions and invariants underlying the
288   algorithms.  The checking is fairly extensive, and will slow down
289   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
290   set will attempt to check every non-mmapped allocated and free chunk
291   in the course of computing the summaries.
292 
293 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
294   Debugging assertion failures can be nearly impossible if your
295   version of the assert macro causes malloc to be called, which will
296   lead to a cascade of further failures, blowing the runtime stack.
297   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
298   which will usually make debugging easier.
299 
300 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
301   The action to take before "return 0" when malloc fails to be able to
302   return memory because there is none available.
303 
304 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
305   True if this system supports sbrk or an emulation of it.
306 
307 MORECORE                  default: sbrk
308   The name of the sbrk-style system routine to call to obtain more
309   memory.  See below for guidance on writing custom MORECORE
310   functions. The type of the argument to sbrk/MORECORE varies across
311   systems.  It cannot be size_t, because it supports negative
312   arguments, so it is normally the signed type of the same width as
313   size_t (sometimes declared as "intptr_t").  It doesn't much matter
314   though. Internally, we only call it with arguments less than half
315   the max value of a size_t, which should work across all reasonable
316   possibilities, although sometimes generating compiler warnings.  See
317   near the end of this file for guidelines for creating a custom
318   version of MORECORE.
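
  As a minimal sketch (not the guidelines referenced above), an sbrk-style
  MORECORE over a fixed static arena might look like the following, with
  the names my_morecore, arena and arena_used purely illustrative; it
  would be selected by defining MORECORE to my_morecore:

    static char   arena[1 << 20];       // illustrative fixed-size pool
    static size_t arena_used = 0;

    void* my_morecore(ptrdiff_t incr) {
      char* prev = arena + arena_used;
      if (incr < 0 || arena_used + (size_t)incr > sizeof(arena))
        return (void*)MAX_SIZE_T;       // report failure (MFAIL)
      arena_used += (size_t)incr;
      return prev;
    }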
319 
320 MORECORE_CONTIGUOUS       default: 1 (true)
321   If true, take advantage of fact that consecutive calls to MORECORE
322   with positive arguments always return contiguous increasing
323   addresses.  This is true of unix sbrk. It does not hurt too much to
324   set it true anyway, since malloc copes with non-contiguities.
325   Setting it false when definitely non-contiguous saves time
326   and possibly wasted space it would take to discover this though.
327 
328 MORECORE_CANNOT_TRIM      default: NOT defined
329   True if MORECORE cannot release space back to the system when given
330   negative arguments. This is generally necessary only if you are
331   using a hand-crafted MORECORE function that cannot handle negative
332   arguments.
333 
334 HAVE_MMAP                 default: 1 (true)
335   True if this system supports mmap or an emulation of it.  If so, and
336   HAVE_MORECORE is not true, MMAP is used for all system
337   allocation. If set and HAVE_MORECORE is true as well, MMAP is
338   primarily used to directly allocate very large blocks. It is also
339   used as a backup strategy in cases where MORECORE fails to provide
340   space from system. Note: A single call to MUNMAP is assumed to be
341   able to unmap memory that may have been allocated using multiple calls
342   to MMAP, so long as they are adjacent.
343 
344 HAVE_MREMAP               default: 1 on linux, else 0
345   If true realloc() uses mremap() to re-allocate large blocks and
346   extend or shrink allocation spaces.
347 
348 MMAP_CLEARS               default: 1 on unix
349   True if mmap clears memory so calloc doesn't need to. This is true
350   for standard unix mmap using /dev/zero.
351 
352 USE_BUILTIN_FFS            default: 0 (i.e., not used)
353   Causes malloc to use the builtin ffs() function to compute indices.
354   Some compilers may recognize and intrinsify ffs to be faster than the
355   supplied C version. Also, the case of x86 using gcc is special-cased
356   to an asm instruction, so is already as fast as it can be, and so
357   this setting has no effect. (On most x86s, the asm version is only
358   slightly faster than the C version.)
359 
360 malloc_getpagesize         default: derive from system includes, or 4096.
361   The system page size. To the extent possible, this malloc manages
362   memory from the system in page-size units.  This may be (and
363   usually is) a function rather than a constant. This is ignored
364   if WIN32, where page size is determined using getSystemInfo during
365   initialization.
366 
367 USE_DEV_RANDOM             default: 0 (i.e., not used)
368   Causes malloc to use /dev/random to initialize secure magic seed for
369   stamping footers. Otherwise, the current time is used.
370 
371 NO_MALLINFO                default: 0
372   If defined, don't compile "mallinfo". This can be a simple way
373   of dealing with mismatches between system declarations and
374   those in this file.
375 
376 MALLINFO_FIELD_TYPE        default: size_t
377   The type of the fields in the mallinfo struct. This was originally
378   defined as "int" in SVID etc, but is more usefully defined as
379   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
380 
381 REALLOC_ZERO_BYTES_FREES    default: not defined
382   This should be set if a call to realloc with zero bytes should
383   be the same as a call to free. Some people think it should. Otherwise,
384   this malloc returns a unique pointer for malloc(0), and so does
385   realloc(p, 0).
386 
387 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
388 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
389 LACKS_STDLIB_H                default: NOT defined unless on WIN32
390   Define these if your system does not have these header files.
391   You might need to manually insert some of the declarations they provide.
392 
393 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
394                                 system_info.dwAllocationGranularity in WIN32,
395                                 otherwise 64K.
396       Also settable using mallopt(M_GRANULARITY, x)
397   The unit for allocating and deallocating memory from the system.  On
398   most systems with contiguous MORECORE, there is no reason to
399   make this more than a page. However, systems with MMAP tend to
400   either require or encourage larger granularities.  You can increase
401   this value to prevent system allocation functions from being called so
402   often, especially if they are slow.  The value must be at least one
403   page and must be a power of two.  Setting to 0 causes initialization
404   to either page size or win32 region size.  (Note: In previous
405   versions of malloc, the equivalent of this option was called
406   "TOP_PAD")
407 
408 DEFAULT_TRIM_THRESHOLD    default: 2MB
409       Also settable using mallopt(M_TRIM_THRESHOLD, x)
410   The maximum amount of unused top-most memory to keep before
411   releasing via malloc_trim in free().  Automatic trimming is mainly
412   useful in long-lived programs using contiguous MORECORE.  Because
413   trimming via sbrk can be slow on some systems, and can sometimes be
414   wasteful (in cases where programs immediately afterward allocate
415   more large chunks) the value should be high enough so that your
416   overall system performance would improve by releasing this much
417   memory.  As a rough guide, you might set to a value close to the
418   average size of a process (program) running on your system.
419   Releasing this much memory would allow such a process to run in
420   memory.  Generally, it is worth tuning trim thresholds when a
421   program undergoes phases where several large chunks are allocated
422   and released in ways that can reuse each other's storage, perhaps
423   mixed with phases where there are no such chunks at all. The trim
424   value must be greater than page size to have any useful effect.  To
425   disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
426   some people use of mallocing a huge space and then freeing it at
427   program startup, in an attempt to reserve system memory, doesn't
428   have the intended effect under automatic trimming, since that memory
429   will immediately be returned to the system.
430 
431 DEFAULT_MMAP_THRESHOLD       default: 256K
432       Also settable using mallopt(M_MMAP_THRESHOLD, x)
433   The request size threshold for using MMAP to directly service a
434   request. Requests of at least this size that cannot be allocated
435   using already-existing space will be serviced via mmap.  (If enough
436   normal freed space already exists it is used instead.)  Using mmap
437   segregates relatively large chunks of memory so that they can be
438   individually obtained and released from the host system. A request
439   serviced through mmap is never reused by any other request (at least
440   not directly; the system may just so happen to remap successive
441   requests to the same locations).  Segregating space in this way has
442   the benefits that: Mmapped space can always be individually released
443   back to the system, which helps keep the system level memory demands
444   of a long-lived program low.  Also, mapped memory doesn't become
445   `locked' between other chunks, as can happen with normally allocated
446   chunks, which means that even trimming via malloc_trim would not
447   release them.  However, it has the disadvantage that the space
448   cannot be reclaimed, consolidated, and then used to service later
449   requests, as happens with normal chunks.  The advantages of mmap
450   nearly always outweigh disadvantages for "large" chunks, but the
451   value of "large" may vary across systems.  The default is an
452   empirically derived value that works well in most systems. You can
453   disable mmap by setting to MAX_SIZE_T.
454 
455 */
456 
457 #ifndef WIN32
458 #ifdef _WIN32
459 #define WIN32 1
460 #endif  /* _WIN32 */
461 #endif  /* WIN32 */
462 #ifdef WIN32
463 #define WIN32_LEAN_AND_MEAN
464 #include <windows.h>
465 #define HAVE_MMAP 1
466 #define HAVE_MORECORE 0
467 #define LACKS_UNISTD_H
468 #define LACKS_SYS_PARAM_H
469 #define LACKS_SYS_MMAN_H
470 #define LACKS_STRING_H
471 #define LACKS_STRINGS_H
472 #define LACKS_SYS_TYPES_H
473 #define LACKS_ERRNO_H
474 #define MALLOC_FAILURE_ACTION
475 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
476 #endif  /* WIN32 */
477 
478 #if defined(DARWIN) || defined(_DARWIN)
479 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
480 #ifndef HAVE_MORECORE
481 #define HAVE_MORECORE 0
482 #define HAVE_MMAP 1
483 #endif  /* HAVE_MORECORE */
484 #endif  /* DARWIN */
485 
486 #ifndef LACKS_SYS_TYPES_H
487 #include <sys/types.h>  /* For size_t */
488 #endif  /* LACKS_SYS_TYPES_H */
489 
490 /* The maximum possible size_t value has all bits set */
491 #define MAX_SIZE_T           (~(size_t)0)
492 
493 #ifndef ONLY_MSPACES
494 #define ONLY_MSPACES 0
495 #endif  /* ONLY_MSPACES */
496 #ifndef MSPACES
497 #if ONLY_MSPACES
498 #define MSPACES 1
499 #else   /* ONLY_MSPACES */
500 #define MSPACES 0
501 #endif  /* ONLY_MSPACES */
502 #endif  /* MSPACES */
503 #ifndef MALLOC_ALIGNMENT
504 #define MALLOC_ALIGNMENT 8  //((size_t)8U)
505 #endif  /* MALLOC_ALIGNMENT */
506 #ifndef FOOTERS
507 #define FOOTERS 0
508 #endif  /* FOOTERS */
509 #ifndef ABORT
510 #define ABORT  abort()
511 #endif  /* ABORT */
512 #ifndef ABORT_ON_ASSERT_FAILURE
513 #define ABORT_ON_ASSERT_FAILURE 1
514 #endif  /* ABORT_ON_ASSERT_FAILURE */
515 #ifndef PROCEED_ON_ERROR
516 #define PROCEED_ON_ERROR 0
517 #endif  /* PROCEED_ON_ERROR */
518 #ifndef USE_LOCKS
519 #define USE_LOCKS 1
520 #endif  /* USE_LOCKS */
521 #ifndef INSECURE
522 #define INSECURE 0
523 #endif  /* INSECURE */
524 #ifndef HAVE_MMAP
525 #define HAVE_MMAP 1
526 #endif  /* HAVE_MMAP */
527 #ifndef MMAP_CLEARS
528 #define MMAP_CLEARS 1
529 #endif  /* MMAP_CLEARS */
530 #ifndef HAVE_MREMAP
531 #ifdef linux
532 #define HAVE_MREMAP 1
533 #else   /* linux */
534 #define HAVE_MREMAP 0
535 #endif  /* linux */
536 #endif  /* HAVE_MREMAP */
537 #ifndef MALLOC_FAILURE_ACTION
538 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
539 #endif  /* MALLOC_FAILURE_ACTION */
540 #ifndef HAVE_MORECORE
541 #if ONLY_MSPACES
542 #define HAVE_MORECORE 0
543 #else   /* ONLY_MSPACES */
544 #define HAVE_MORECORE 1
545 #endif  /* ONLY_MSPACES */
546 #endif  /* HAVE_MORECORE */
547 #if !HAVE_MORECORE
548 #define MORECORE_CONTIGUOUS 0
549 #else   /* !HAVE_MORECORE */
550 #ifndef MORECORE
551 #define MORECORE sbrk
552 #endif  /* MORECORE */
553 #ifndef MORECORE_CONTIGUOUS
554 #define MORECORE_CONTIGUOUS 1
555 #endif  /* MORECORE_CONTIGUOUS */
556 #endif  /* HAVE_MORECORE */
557 #ifndef DEFAULT_GRANULARITY
558 #if MORECORE_CONTIGUOUS
559 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
560 #else   /* MORECORE_CONTIGUOUS */
561 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
562 #endif  /* MORECORE_CONTIGUOUS */
563 #endif  /* DEFAULT_GRANULARITY */
564 #ifndef DEFAULT_TRIM_THRESHOLD
565 #ifndef MORECORE_CANNOT_TRIM
566 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
567 #else   /* MORECORE_CANNOT_TRIM */
568 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
569 #endif  /* MORECORE_CANNOT_TRIM */
570 #endif  /* DEFAULT_TRIM_THRESHOLD */
571 #ifndef DEFAULT_MMAP_THRESHOLD
572 #if HAVE_MMAP
573 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
574 #else   /* HAVE_MMAP */
575 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
576 #endif  /* HAVE_MMAP */
577 #endif  /* DEFAULT_MMAP_THRESHOLD */
578 #ifndef USE_BUILTIN_FFS
579 #define USE_BUILTIN_FFS 0
580 #endif  /* USE_BUILTIN_FFS */
581 #ifndef USE_DEV_RANDOM
582 #define USE_DEV_RANDOM 0
583 #endif  /* USE_DEV_RANDOM */
584 #ifndef NO_MALLINFO
585 #define NO_MALLINFO 0
586 #endif  /* NO_MALLINFO */
587 #ifndef MALLINFO_FIELD_TYPE
588 #define MALLINFO_FIELD_TYPE size_t
589 #endif  /* MALLINFO_FIELD_TYPE */
590 
591 /*
592   mallopt tuning options.  SVID/XPG defines four standard parameter
593   numbers for mallopt, normally defined in malloc.h.  None of these
594   are used in this malloc, so setting them has no effect. But this
595   malloc does support the following options.
596 */
597 
598 #define M_TRIM_THRESHOLD     (-1)
599 #define M_GRANULARITY        (-2)
600 #define M_MMAP_THRESHOLD     (-3)
601 
602 /* ------------------------ Mallinfo declarations ------------------------ */
603 
604 #if !NO_MALLINFO
605 /*
606   This version of malloc supports the standard SVID/XPG mallinfo
607   routine that returns a struct containing usage properties and
608   statistics. It should work on any system that has a
609   /usr/include/malloc.h defining struct mallinfo.  The main
610   declaration needed is the mallinfo struct that is returned (by-copy)
611   by mallinfo().  The mallinfo struct contains a bunch of fields that
612   are not even meaningful in this version of malloc.  These fields are
613   instead filled by mallinfo() with other numbers that might be of
614   interest.
615 
616   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
617   /usr/include/malloc.h file that includes a declaration of struct
618   mallinfo.  If so, it is included; else a compliant version is
619   declared below.  These must be precisely the same for mallinfo() to
620   work.  The original SVID version of this struct, defined on most
621   systems with mallinfo, declares all fields as ints. But some others
622   define them as unsigned long. If your system defines the fields using a
623   type of different width than listed here, you MUST #include your
624   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
625 */
626 
627 /* #define HAVE_USR_INCLUDE_MALLOC_H */
628 
629 #ifdef HAVE_USR_INCLUDE_MALLOC_H
630 #include "/usr/include/malloc.h"
631 #else /* HAVE_USR_INCLUDE_MALLOC_H */
632 
633 struct mallinfo {
634   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
635   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
636   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
637   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
638   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
639   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
640   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
641   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
642   MALLINFO_FIELD_TYPE fordblks; /* total free space */
643   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
644 };
645 
646 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
647 #endif /* NO_MALLINFO */
648 
649 #ifdef __cplusplus
650 extern "C" {
651 #endif /* __cplusplus */
652 
653 #if !ONLY_MSPACES
654 
655 /* ------------------- Declarations of public routines ------------------- */
656 
657 #ifndef USE_DL_PREFIX
658 #define dlcalloc               calloc
659 #define dlfree                 free
660 #define dlmalloc               malloc
661 #define dlmemalign             memalign
662 #define dlrealloc              realloc
663 #define dlvalloc               valloc
664 #define dlpvalloc              pvalloc
665 #define dlmallinfo             mallinfo
666 #define dlmallopt              mallopt
667 #define dlmalloc_trim          malloc_trim
668 #define dlmalloc_stats         malloc_stats
669 #define dlmalloc_usable_size   malloc_usable_size
670 #define dlmalloc_footprint     malloc_footprint
671 #define dlmalloc_max_footprint malloc_max_footprint
672 #define dlindependent_calloc   independent_calloc
673 #define dlindependent_comalloc independent_comalloc
674 #endif /* USE_DL_PREFIX */
675 
676 
677 /*
678   malloc(size_t n)
679   Returns a pointer to a newly allocated chunk of at least n bytes, or
680   null if no space is available, in which case errno is set to ENOMEM
681   on ANSI C systems.
682 
683   If n is zero, malloc returns a minimum-sized chunk. (The minimum
684   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
685   systems.)  Note that size_t is an unsigned type, so calls with
686   arguments that would be negative if signed are interpreted as
687   requests for huge amounts of space, which will often fail. The
688   maximum supported value of n differs across systems, but is in all
689   cases less than the maximum representable value of a size_t.
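
  For example (a minimal sketch, where n is the requested size):

    void* p = malloc(n);
    if (p == 0) {
      // no space was available; on ANSI C systems errno is ENOMEM
    }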
690 */
691 void* dlmalloc(size_t);
692 
693 /*
694   free(void* p)
695   Releases the chunk of memory pointed to by p, that had been previously
696   allocated using malloc or a related routine such as realloc.
697   It has no effect if p is null. If p was not malloced or already
698   freed, free(p) will by default cause the current program to abort.
699 */
700 void  dlfree(void*);
701 
702 /*
703   calloc(size_t n_elements, size_t element_size);
704   Returns a pointer to n_elements * element_size bytes, with all locations
705   set to zero.
706 */
707 void* dlcalloc(size_t, size_t);
708 
709 /*
710   realloc(void* p, size_t n)
711   Returns a pointer to a chunk of size n that contains the same data
712   as does chunk p up to the minimum of (n, p's size) bytes, or null
713   if no space is available.
714 
715   The returned pointer may or may not be the same as p. The algorithm
716   prefers extending p in most cases when possible, otherwise it
717   employs the equivalent of a malloc-copy-free sequence.
718 
719   If p is null, realloc is equivalent to malloc.
720 
721   If space is not available, realloc returns null, errno is set (if on
722   ANSI) and p is NOT freed.
723 
724   If n is for fewer bytes than already held by p, the newly unused
725   space is lopped off and freed if possible.  realloc with a size
726   argument of zero (re)allocates a minimum-sized chunk.
727 
728   The old unix realloc convention of allowing the last-free'd chunk
729   to be used as an argument to realloc is not supported.
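
  Because a failed realloc leaves p intact, a common calling pattern is
  (a minimal sketch):

    void* q = realloc(p, n);
    if (q == 0) {
      // handle the failure; p is still valid and still owned by the caller
    }
    else
      p = q;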
730 */
731 
732 void* dlrealloc(void*, size_t);
733 
734 /*
735   memalign(size_t alignment, size_t n);
736   Returns a pointer to a newly allocated chunk of n bytes, aligned
737   in accord with the alignment argument.
738 
739   The alignment argument should be a power of two. If the argument is
740   not a power of two, the nearest greater power is used.
741   8-byte alignment is guaranteed by normal malloc calls, so don't
742   bother calling memalign with an argument of 8 or less.
743 
744   Overreliance on memalign is a sure way to fragment space.
745 */
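
  For example, to obtain a block aligned to a 64-byte boundary (a minimal
  sketch; the alignment and size are only illustrative):

    void* p = memalign(64, 1000);   // at least 1000 bytes, 64-byte aligned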
746 void* dlmemalign(size_t, size_t);
747 
748 /*
749   valloc(size_t n);
750   Equivalent to memalign(pagesize, n), where pagesize is the page
751   size of the system. If the pagesize is unknown, 4096 is used.
752 */
753 void* dlvalloc(size_t);
754 
755 /*
756   mallopt(int parameter_number, int parameter_value)
757   Sets tunable parameters. The format is to provide a
758   (parameter-number, parameter-value) pair.  mallopt then sets the
759   corresponding parameter to the argument value if it can (i.e., so
760   long as the value is meaningful), and returns 1 if successful else
761   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
762   normally defined in malloc.h.  None of these are used in this malloc,
763   so setting them has no effect. But this malloc also supports other
764   options in mallopt. See below for details.  Briefly, supported
765   parameters are as follows (listed defaults are for "typical"
766   configurations).
767 
768   Symbol            param #  default    allowed param values
769   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
770   M_GRANULARITY        -2     page size   any power of 2 >= page size
771   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
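
  For example, to raise the mmap threshold to 1MB (a minimal sketch; the
  value and the handle_config_error() handler are only illustrative):

    if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024) != 1)
      handle_config_error();  // mallopt returned 0; old setting unchanged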
772 */
773 int dlmallopt(int, int);
774 
775 /*
776   malloc_footprint();
777   Returns the number of bytes obtained from the system.  The total
778   number of bytes allocated by malloc, realloc etc., is less than this
779   value. Unlike mallinfo, this function returns only a precomputed
780   result, so can be called frequently to monitor memory consumption.
781   Even if locks are otherwise defined, this function does not use them,
782   so results might not be up to date.
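
  For example, it can be polled around a phase of a program to observe
  growth (a minimal sketch; run_phase() is only an illustrative workload):

    size_t before = malloc_footprint();
    run_phase();
    printf("footprint grew by %lu bytes\n",
           (unsigned long)(malloc_footprint() - before));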
783 */
784 size_t dlmalloc_footprint(void);
785 
786 /*
787   malloc_max_footprint();
788   Returns the maximum number of bytes obtained from the system. This
789   value will be greater than current footprint if deallocated space
790   has been reclaimed by the system. The peak number of bytes allocated
791   by malloc, realloc etc., is less than this value. Unlike mallinfo,
792   this function returns only a precomputed result, so can be called
793   frequently to monitor memory consumption.  Even if locks are
794   otherwise defined, this function does not use them, so results might
795   not be up to date.
796 */
797 size_t dlmalloc_max_footprint(void);
798 
799 #if !NO_MALLINFO
800 /*
801   mallinfo()
802   Returns (by copy) a struct containing various summary statistics:
803 
804   arena:     current total non-mmapped bytes allocated from system
805   ordblks:   the number of free chunks
806   smblks:    always zero.
807   hblks:     current number of mmapped regions
808   hblkhd:    total bytes held in mmapped regions
809   usmblks:   the maximum total allocated space. This will be greater
810                 than current total if trimming has occurred.
811   fsmblks:   always zero
812   uordblks:  current total allocated space (normal or mmapped)
813   fordblks:  total free space
814   keepcost:  the maximum number of bytes that could ideally be released
815                back to system via malloc_trim. ("ideally" means that
816                it ignores page restrictions etc.)
817 
818   Because these fields are ints, but internal bookkeeping may
819   be kept as longs, the reported values may wrap around zero and
820   thus be inaccurate.
821 */
822 struct mallinfo dlmallinfo(void);
823 #endif /* NO_MALLINFO */
824 
825 /*
826   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
827 
828   independent_calloc is similar to calloc, but instead of returning a
829   single cleared space, it returns an array of pointers to n_elements
830   independent elements that can hold contents of size elem_size, each
831   of which starts out cleared, and can be independently freed,
832   realloc'ed etc. The elements are guaranteed to be adjacently
833   allocated (this is not guaranteed to occur with multiple callocs or
834   mallocs), which may also improve cache locality in some
835   applications.
836 
837   The "chunks" argument is optional (i.e., may be null, which is
838   probably the most typical usage). If it is null, the returned array
839   is itself dynamically allocated and should also be freed when it is
840   no longer needed. Otherwise, the chunks array must be of at least
841   n_elements in length. It is filled in with the pointers to the
842   chunks.
843 
844   In either case, independent_calloc returns this pointer array, or
845   null if the allocation failed.  If n_elements is zero and "chunks"
846   is null, it returns a chunk representing an array with zero elements
847   (which should be freed if not wanted).
848 
849   Each element must be individually freed when it is no longer
850   needed. If you'd like to instead be able to free all at once, you
851   should instead use regular calloc and assign pointers into this
852   space to represent elements.  (In this case though, you cannot
853   independently free elements.)
854 
855   independent_calloc simplifies and speeds up implementations of many
856   kinds of pools.  It may also be useful when constructing large data
857   structures that initially have a fixed number of fixed-sized nodes,
858   but the number is not known at compile time, and some of the nodes
859   may later need to be freed. For example:
860 
861   struct Node { int item; struct Node* next; };
862 
863   struct Node* build_list() {
864     struct Node** pool;
865     int n = read_number_of_nodes_needed();
866     if (n <= 0) return 0;
867     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
868     if (pool == 0) die();
869     // organize into a linked list...
870     struct Node* first = pool[0];
871     for (int i = 0; i < n-1; ++i)
872       pool[i]->next = pool[i+1];
873     free(pool);     // Can now free the array (or not, if it is needed later)
874     return first;
875   }
876 */
877 void** dlindependent_calloc(size_t, size_t, void**);
878 
879 /*
880   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
881 
882   independent_comalloc allocates, all at once, a set of n_elements
883   chunks with sizes indicated in the "sizes" array.    It returns
884   an array of pointers to these elements, each of which can be
885   independently freed, realloc'ed etc. The elements are guaranteed to
886   be adjacently allocated (this is not guaranteed to occur with
887   multiple callocs or mallocs), which may also improve cache locality
888   in some applications.
889 
890   The "chunks" argument is optional (i.e., may be null). If it is null
891   the returned array is itself dynamically allocated and should also
892   be freed when it is no longer needed. Otherwise, the chunks array
893   must be of at least n_elements in length. It is filled in with the
894   pointers to the chunks.
895 
896   In either case, independent_comalloc returns this pointer array, or
897   null if the allocation failed.  If n_elements is zero and chunks is
898   null, it returns a chunk representing an array with zero elements
899   (which should be freed if not wanted).
900 
901   Each element must be individually freed when it is no longer
902   needed. If you'd like to instead be able to free all at once, you
903   should instead use a single regular malloc, and assign pointers at
904   particular offsets in the aggregate space. (In this case though, you
905   cannot independently free elements.)
906 
907   independent_comalloc differs from independent_calloc in that each
908   element may have a different size, and also that it does not
909   automatically clear elements.
910 
911   independent_comalloc can be used to speed up allocation in cases
912   where several structs or objects must always be allocated at the
913   same time.  For example:
914 
915   struct Head { ... }
916   struct Foot { ... }
917 
918   void send_message(char* msg) {
919     int msglen = strlen(msg);
920     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
921     void* chunks[3];
922     if (independent_comalloc(3, sizes, chunks) == 0)
923       die();
924     struct Head* head = (struct Head*)(chunks[0]);
925     char*        body = (char*)(chunks[1]);
926     struct Foot* foot = (struct Foot*)(chunks[2]);
927     // ...
928   }
929 
930   In general though, independent_comalloc is worth using only for
931   larger values of n_elements. For small values, you probably won't
932   detect enough difference from series of malloc calls to bother.
933 
934   Overuse of independent_comalloc can increase overall memory usage,
935   since it cannot reuse existing noncontiguous small chunks that
936   might be available for some of the elements.
937 */
938 void** dlindependent_comalloc(size_t, size_t*, void**);
939 
940 
941 /*
942   pvalloc(size_t n);
943   Equivalent to valloc(minimum-page-that-holds(n)), that is,
944   round up n to nearest pagesize.
945  */
946 void*  dlpvalloc(size_t);
947 
948 /*
949   malloc_trim(size_t pad);
950 
951   If possible, gives memory back to the system (via negative arguments
952   to sbrk) if there is unused memory at the `high' end of the malloc
953   pool or in unused MMAP segments. You can call this after freeing
954   large blocks of memory to potentially reduce the system-level memory
955   requirements of a program. However, it cannot guarantee to reduce
956   memory. Under some allocation patterns, some large free blocks of
957   memory will be locked between two used chunks, so they cannot be
958   given back to the system.
959 
960   The `pad' argument to malloc_trim represents the amount of free
961   trailing space to leave untrimmed. If this argument is zero, only
962   the minimum amount of memory to maintain internal data structures
963   will be left. Non-zero arguments can be supplied to maintain enough
964   trailing space to service future expected allocations without having
965   to re-obtain memory from the system.
966 
967   Malloc_trim returns 1 if it actually released any memory, else 0.
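
  For example, after freeing a large block you might attempt to return
  the excess to the system (a minimal sketch; big_buffer is illustrative):

    free(big_buffer);
    (void)malloc_trim(0);   // returns 1 if any memory was actually released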
968 */
969 int  dlmalloc_trim(size_t);
970 
971 /*
972   malloc_usable_size(void* p);
973 
974   Returns the number of bytes you can actually use in
975   an allocated chunk, which may be more than you requested (although
976   often not) due to alignment and minimum size constraints.
977   You can use this many bytes without worrying about
978   overwriting other allocated objects. This is not a particularly great
979   programming practice. malloc_usable_size can be more useful in
980   debugging and assertions, for example:
981 
982   p = malloc(n);
983   assert(malloc_usable_size(p) >= 256);
984 */
985 size_t dlmalloc_usable_size(void*);
986 
987 /*
988   malloc_stats();
989   Prints on stderr the amount of space obtained from the system (both
990   via sbrk and mmap), the maximum amount (which may be more than
991   current if malloc_trim and/or munmap got called), and the current
992   number of bytes allocated via malloc (or realloc, etc) but not yet
993   freed. Note that this is the number of bytes allocated, not the
994   number requested. It will be larger than the number requested
995   because of alignment and bookkeeping overhead. Because it includes
996   alignment wastage as being in use, this figure may be greater than
997   zero even when no user-level chunks are allocated.
998 
999   The reported current and maximum system memory can be inaccurate if
1000   a program makes other calls to system memory allocation functions
1001   (normally sbrk) outside of malloc.
1002 
1003   malloc_stats prints only the most commonly interesting statistics.
1004   More information can be obtained by calling mallinfo.
1005 */
1006 void  dlmalloc_stats(void);
1007 
1008 #endif /* ONLY_MSPACES */
1009 
1010 #if MSPACES
1011 
1012 /*
1013   mspace is an opaque type representing an independent
1014   region of space that supports mspace_malloc, etc.
1015 */
1016 typedef void* mspace;
1017 
1018 /*
1019   create_mspace creates and returns a new independent space with the
1020   given initial capacity, or, if 0, the default granularity size.  It
1021   returns null if there is no system memory available to create the
1022   space.  If argument locked is non-zero, the space uses a separate
1023   lock to control access. The capacity of the space will grow
1024   dynamically as needed to service mspace_malloc requests.  You can
1025   control the sizes of incremental increases of this space by
1026   compiling with a different DEFAULT_GRANULARITY or dynamically
1027   setting with mallopt(M_GRANULARITY, value).
1028 */
1029 mspace create_mspace(size_t capacity, int locked);
1030 
1031 /*
1032   destroy_mspace destroys the given space, and attempts to return all
1033   of its memory back to the system, returning the total number of
1034   bytes freed. After destruction, the results of access to all memory
1035   used by the space become undefined.
1036 */
1037 size_t destroy_mspace(mspace msp);
1038 
1039 /*
1040   create_mspace_with_base uses the memory supplied as the initial base
1041   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1042   space is used for bookkeeping, so the capacity must be at least this
1043   large. (Otherwise 0 is returned.) When this initial space is
1044   exhausted, additional memory will be obtained from the system.
1045   Destroying this space will deallocate all additionally allocated
1046   space (if possible) but not the initial base.
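
  For example, to carve an mspace out of a statically reserved buffer
  (a minimal sketch; the buffer size is only illustrative):

    static char buffer[1024 * 1024];
    mspace ms = create_mspace_with_base(buffer, sizeof(buffer), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      // ... use p ...
      mspace_free(ms, p);
    }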
1047 */
1048 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
1049 
1050 /*
1051   mspace_malloc behaves as malloc, but operates within
1052   the given space.
1053 */
1054 void* mspace_malloc(mspace msp, size_t bytes);
1055 
1056 /*
1057   mspace_free behaves as free, but operates within
1058   the given space.
1059 
1060   If compiled with FOOTERS==1, mspace_free is not actually needed.
1061   free may be called instead of mspace_free because freed chunks from
1062   any space are handled by their originating spaces.
1063 */
1064 void mspace_free(mspace msp, void* mem);
1065 
1066 /*
1067   mspace_realloc behaves as realloc, but operates within
1068   the given space.
1069 
1070   If compiled with FOOTERS==1, mspace_realloc is not actually
1071   needed.  realloc may be called instead of mspace_realloc because
1072   realloced chunks from any space are handled by their originating
1073   spaces.
1074 */
1075 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1076 
1077 /*
1078   mspace_calloc behaves as calloc, but operates within
1079   the given space.
1080 */
1081 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1082 
1083 /*
1084   mspace_memalign behaves as memalign, but operates within
1085   the given space.
1086 */
1087 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1088 
1089 /*
1090   mspace_independent_calloc behaves as independent_calloc, but
1091   operates within the given space.
1092 */
1093 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1094                                  size_t elem_size, void* chunks[]);
1095 
1096 /*
1097   mspace_independent_comalloc behaves as independent_comalloc, but
1098   operates within the given space.
1099 */
1100 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1101                                    size_t sizes[], void* chunks[]);
1102 
1103 /*
1104   mspace_footprint() returns the number of bytes obtained from the
1105   system for this space.
1106 */
1107 size_t mspace_footprint(mspace msp);
1108 
1109 /*
1110   mspace_max_footprint() returns the peak number of bytes obtained from the
1111   system for this space.
1112 */
1113 size_t mspace_max_footprint(mspace msp);
1114 
1115 
1116 #if !NO_MALLINFO
1117 /*
1118   mspace_mallinfo behaves as mallinfo, but reports properties of
1119   the given space.
1120 */
1121 struct mallinfo mspace_mallinfo(mspace msp);
1122 #endif /* NO_MALLINFO */
1123 
1124 /*
1125   mspace_malloc_stats behaves as malloc_stats, but reports
1126   properties of the given space.
1127 */
1128 void mspace_malloc_stats(mspace msp);
1129 
1130 /*
1131   mspace_trim behaves as malloc_trim, but
1132   operates within the given space.
1133 */
1134 int mspace_trim(mspace msp, size_t pad);
1135 
1136 /*
1137   An alias for mallopt.
1138 */
1139 int mspace_mallopt(int, int);
1140 
1141 #endif /* MSPACES */
1142 
1143 #ifdef __cplusplus
1144 };  /* end of extern "C" */
1145 #endif /* __cplusplus */
1146 
1147 /*
1148   ========================================================================
1149   To make a fully customizable malloc.h header file, cut everything
1150   above this line, put into file malloc.h, edit to suit, and #include it
1151   on the next line, as well as in programs that use this malloc.
1152   ========================================================================
1153 */
1154 
1155 /* #include "malloc.h" */
1156 
1157 /*------------------------------ internal #includes ---------------------- */
1158 
1159 #ifdef WIN32
1160 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1161 #endif /* WIN32 */
1162 
1163 #include <stdio.h>       /* for printing in malloc_stats */
1164 
1165 #ifndef LACKS_ERRNO_H
1166 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1167 #endif /* LACKS_ERRNO_H */
1168 #if FOOTERS
1169 #include <time.h>        /* for magic initialization */
1170 #endif /* FOOTERS */
1171 #ifndef LACKS_STDLIB_H
1172 //#include <stdlib.h>      /* for abort() */
1173 void abort(void);
1174 #endif /* LACKS_STDLIB_H */
1175 #ifdef DEBUG
1176 #if ABORT_ON_ASSERT_FAILURE
1177 #define assert(x) if(!(x)) ABORT
1178 #else /* ABORT_ON_ASSERT_FAILURE */
1179 #include <assert.h>
1180 #endif /* ABORT_ON_ASSERT_FAILURE */
1181 #else  /* DEBUG */
1182 #define assert(x)
1183 #endif /* DEBUG */
1184 #ifndef LACKS_STRING_H
1185 #include <string.h>      /* for memset etc */
1186 #endif  /* LACKS_STRING_H */
1187 #if USE_BUILTIN_FFS
1188 #ifndef LACKS_STRINGS_H
1189 #include <strings.h>     /* for ffs */
1190 #endif /* LACKS_STRINGS_H */
1191 #endif /* USE_BUILTIN_FFS */
1192 #if HAVE_MMAP
1193 #ifndef LACKS_SYS_MMAN_H
1194 #include <sys/mman.h>    /* for mmap */
1195 #endif /* LACKS_SYS_MMAN_H */
1196 #ifndef LACKS_FCNTL_H
1197 #include <fcntl.h>
1198 #endif /* LACKS_FCNTL_H */
1199 #endif /* HAVE_MMAP */
1200 #if HAVE_MORECORE
1201 #ifndef LACKS_UNISTD_H
1202 #include <unistd.h>     /* for sbrk */
1203 #else /* LACKS_UNISTD_H */
1204 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1205 extern void*     sbrk(ptrdiff_t);
1206 #endif /* FreeBSD etc */
1207 #endif /* LACKS_UNISTD_H */
1208 #endif /* HAVE_MORECORE */
1209 
1210 #ifndef WIN32
1211 #ifndef malloc_getpagesize
1212 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1213 #    ifndef _SC_PAGE_SIZE
1214 #      define _SC_PAGE_SIZE _SC_PAGESIZE
1215 #    endif
1216 #  endif
1217 #  ifdef _SC_PAGE_SIZE
1218 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1219 #  else
1220 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1221        extern size_t getpagesize();
1222 #      define malloc_getpagesize getpagesize()
1223 #    else
1224 #      ifdef WIN32 /* use supplied emulation of getpagesize */
1225 #        define malloc_getpagesize getpagesize()
1226 #      else
1227 #        ifndef LACKS_SYS_PARAM_H
1228 #          include <sys/param.h>
1229 #        endif
1230 #        ifdef EXEC_PAGESIZE
1231 #          define malloc_getpagesize EXEC_PAGESIZE
1232 #        else
1233 #          ifdef NBPG
1234 #            ifndef CLSIZE
1235 #              define malloc_getpagesize NBPG
1236 #            else
1237 #              define malloc_getpagesize (NBPG * CLSIZE)
1238 #            endif
1239 #          else
1240 #            ifdef NBPC
1241 #              define malloc_getpagesize NBPC
1242 #            else
1243 #              ifdef PAGESIZE
1244 #                define malloc_getpagesize PAGESIZE
1245 #              else /* just guess */
1246 #                define malloc_getpagesize ((size_t)4096U)
1247 #              endif
1248 #            endif
1249 #          endif
1250 #        endif
1251 #      endif
1252 #    endif
1253 #  endif
1254 #endif
1255 #endif
1256 
1257 /* ------------------- size_t and alignment properties -------------------- */
1258 
1259 #if 1
1260 #include "dlmalloc.h"
1261 #else
1262 // moved out to dlmalloc.h
1263 /* The byte and bit size of a size_t */
1264 #define SIZE_T_SIZE         (sizeof(size_t))
1265 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1266 
1267 /* Some constants coerced to size_t */
1268 /* Annoying but necessary to avoid errors on some platforms */
1269 #define SIZE_T_ZERO         ((size_t)0)
1270 #define SIZE_T_ONE          ((size_t)1)
1271 #define SIZE_T_TWO          ((size_t)2)
1272 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1273 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1274 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1275 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1276 
1277 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1278 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1279 
1280 /* True if address a has acceptable alignment */
1281 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1282 
1283 /* the number of bytes to offset an address to align it */
1284 #define align_offset(A)\
1285  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1286   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
1287 
1288 #endif  // moved out to dlmalloc.h
1289 
1290 /* -------------------------- MMAP preliminaries ------------------------- */
1291 
1292 /*
1293    If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
1294    checks to fail so that the compiler optimizer can delete code rather
1295    than using so many "#if"s.
1296 */
1297 
1298 
1299 /* MORECORE and MMAP must return MFAIL on failure */
1300 #define MFAIL                ((void*)(MAX_SIZE_T))
1301 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1302 
1303 #if !HAVE_MMAP
1304 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
1305 #define USE_MMAP_BIT         (SIZE_T_ZERO)
1306 #define CALL_MMAP(s)         MFAIL
1307 #define CALL_MUNMAP(a, s)    (-1)
1308 #define DIRECT_MMAP(s)       MFAIL
1309 
1310 #else /* HAVE_MMAP */
1311 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
1312 #define USE_MMAP_BIT         (SIZE_T_ONE)
1313 
1314 #ifndef WIN32
1315 #define CALL_MUNMAP(a, s)    munmap((a), (s))
1316 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1317 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1318 #define MAP_ANONYMOUS        MAP_ANON
1319 #endif /* MAP_ANON */
1320 #ifdef MAP_ANONYMOUS
1321 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1322 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1323 #else /* MAP_ANONYMOUS */
1324 /*
1325    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1326    is unlikely to be needed, but is supplied just in case.
1327 */
1328 #define MMAP_FLAGS           (MAP_PRIVATE)
1329 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1330 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1331            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1332             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1333             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1334 #endif /* MAP_ANONYMOUS */
1335 
1336 #define DIRECT_MMAP(s)       CALL_MMAP(s)
1337 #else /* WIN32 */
1338 
1339 /* Win32 MMAP via VirtualAlloc */
1340 static void* win32mmap(size_t size) {
1341   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1342   return (ptr != 0)? ptr: MFAIL;
1343 }
1344 
1345 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1346 static void* win32direct_mmap(size_t size) {
1347   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1348                            PAGE_READWRITE);
1349   return (ptr != 0)? ptr: MFAIL;
1350 }
1351 
1352 /* This function supports releasing coalesced segments */
1353 static int win32munmap(void* ptr, size_t size) {
1354   MEMORY_BASIC_INFORMATION minfo;
1355   char* cptr = ptr;
1356   while (size) {
1357     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1358       return -1;
1359     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1360         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1361       return -1;
1362     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1363       return -1;
1364     cptr += minfo.RegionSize;
1365     size -= minfo.RegionSize;
1366   }
1367   return 0;
1368 }
1369 
1370 #define CALL_MMAP(s)         win32mmap(s)
1371 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1372 #define DIRECT_MMAP(s)       win32direct_mmap(s)
1373 #endif /* WIN32 */
1374 #endif /* HAVE_MMAP */
1375 
1376 #if HAVE_MMAP && HAVE_MREMAP
1377 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1378 #else  /* HAVE_MMAP && HAVE_MREMAP */
1379 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1380 #endif /* HAVE_MMAP && HAVE_MREMAP */
1381 
1382 #if HAVE_MORECORE
1383 #define CALL_MORECORE(S)     MORECORE(S)
1384 #else  /* HAVE_MORECORE */
1385 #define CALL_MORECORE(S)     MFAIL
1386 #endif /* HAVE_MORECORE */
1387 
1388 /* mstate bit set if contiguous morecore disabled or failed */
1389 #define USE_NONCONTIGUOUS_BIT (4U)
1390 
1391 /* segment bit set in create_mspace_with_base */
1392 #define EXTERN_BIT            (8U)
1393 
1394 
1395 /* --------------------------- Lock preliminaries ------------------------ */
1396 
1397 #if USE_LOCKS
1398 
1399 /*
1400   When locks are defined, there are up to two global locks:
1401 
1402   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1403     MORECORE.  In many cases sys_alloc requires two calls that should
1404     not be interleaved with calls by other threads.  This does not
1405     protect against direct calls to MORECORE by other threads not
1406     using this lock, so there is still code to cope as best we can with
1407     interference.
1408 
1409   * magic_init_mutex ensures that mparams.magic and other
1410     unique mparams values are initialized only once.
1411 */
1412 
1413 #ifndef WIN32
1414 /* By default use posix locks */
1415 #if 0
1416 #include <pthread.h>
1417 #define MLOCK_T pthread_mutex_t
1418 #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
1419 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
1420 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
1421 #endif
1422 
1423 #define ENOMEM               12
1424 
1425 #include "ucos.h"
1426 
1427 #define INT8U                unsigned char
1428 
1429 #define abort()              *((int*)0) = 0
1430 #define MLOCK_T              OS_EVENT*
1431 #define INITIAL_LOCK(l)      ((NULL == ((*(l)) = OSSemCreate(1)))? -1 : 0)
1432 #define ACQUIRE_LOCK(l)      ({ INT8U u8Err; OSSemPend((*(l)), 0, &u8Err); u8Err; })
1433 #define RELEASE_LOCK(l)      ({OSSemPost((*(l))); (0); })
1434 
1435 #if HAVE_MORECORE
1436 // static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1437 static MLOCK_T morecore_mutex = NULL;
1438 #endif /* HAVE_MORECORE */
1439 
1440 // static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1441 static MLOCK_T magic_init_mutex = NULL;
1442 
1443 void dlinit(void)
1444 {
1445 #if HAVE_MORECORE
1446     INITIAL_LOCK(&morecore_mutex);
1447 #endif
1448     INITIAL_LOCK(&magic_init_mutex);
1449 }
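
/*
  With this port the locks are uC/OS-II semaphores, so dlinit() must run
  once, after OSInit() has set up the kernel and before the first
  dlmalloc/dlfree call from any task.  A minimal start-up sketch follows;
  the application-side names are illustrative only, not part of this module.
*/
#if 0
static void app_init(void) {
  OSInit();      /* kernel structures must exist before OSSemCreate */
  dlinit();      /* creates the magic_init (and morecore) semaphores */
  /* ... create tasks that may then call dlmalloc()/dlfree() ... */
  OSStart();     /* start multitasking; never returns */
}
#endif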
1450 
1451 #else /* WIN32 */
1452 /*
1453    Because lock-protected regions have bounded times, and there
1454    are no recursive lock calls, we can use simple spinlocks.
1455 */
1456 
1457 #define MLOCK_T long
1458 static int win32_acquire_lock (MLOCK_T *sl) {
1459   for (;;) {
1460 #ifdef InterlockedCompareExchangePointer
1461     if (!InterlockedCompareExchange(sl, 1, 0))
1462       return 0;
1463 #else  /* Use older void* version */
1464     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1465       return 0;
1466 #endif /* InterlockedCompareExchangePointer */
1467     Sleep (0);
1468   }
1469 }
1470 
1471 static void win32_release_lock (MLOCK_T *sl) {
1472   InterlockedExchange (sl, 0);
1473 }
1474 
1475 #define INITIAL_LOCK(l)      *(l)=0
1476 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
1477 #define RELEASE_LOCK(l)      win32_release_lock(l)
1478 #if HAVE_MORECORE
1479 static MLOCK_T morecore_mutex;
1480 #endif /* HAVE_MORECORE */
1481 static MLOCK_T magic_init_mutex;
1482 #endif /* WIN32 */
1483 
1484 #define USE_LOCK_BIT               (2U)
1485 #else  /* USE_LOCKS */
1486 #define USE_LOCK_BIT               (0U)
1487 #define INITIAL_LOCK(l)            (0)  /* must expand to a value; see init_mparams */
1488 #endif /* USE_LOCKS */
1489 
1490 #if USE_LOCKS && HAVE_MORECORE
1491 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1492 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1493 #else /* USE_LOCKS && HAVE_MORECORE */
1494 #define ACQUIRE_MORECORE_LOCK()
1495 #define RELEASE_MORECORE_LOCK()
1496 #endif /* USE_LOCKS && HAVE_MORECORE */
1497 
1498 #if USE_LOCKS
1499 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1500 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1501 #else  /* USE_LOCKS */
1502 #define ACQUIRE_MAGIC_INIT_LOCK()  (0)  /* must expand to a value; see init_mparams */
1503 #define RELEASE_MAGIC_INIT_LOCK()
1504 #endif /* USE_LOCKS */
1505 
1506 
1507 /* -----------------------  Chunk representations ------------------------ */
1508 
1509 /*
1510   (The following includes lightly edited explanations by Colin Plumb.)
1511 
1512   The malloc_chunk declaration below is misleading (but accurate and
1513   necessary).  It declares a "view" into memory allowing access to
1514   necessary fields at known offsets from a given base.
1515 
1516   Chunks of memory are maintained using a `boundary tag' method as
1517   originally described by Knuth.  (See the paper by Paul Wilson
1518   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1519   techniques.)  Sizes of free chunks are stored both in the front of
1520   each chunk and at the end.  This makes consolidating fragmented
1521   chunks into bigger chunks fast.  The head fields also hold bits
1522   representing whether chunks are free or in use.
1523 
1524   Here are some pictures to make it clearer.  They are "exploded" to
1525   show that the state of a chunk can be thought of as extending from
1526   the high 31 bits of the head field of its header through the
1527   prev_foot and PINUSE_BIT bit of the following chunk header.
1528 
1529   A chunk that's in use looks like:
1530 
1531    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1532            | Size of previous chunk (if P = 1)                             |
1533            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1534          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1535          | Size of this chunk                                         1| +-+
1536    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1537          |                                                               |
1538          +-                                                             -+
1539          |                                                               |
1540          +-                                                             -+
1541          |                                                               :
1542          +-      size - sizeof(size_t) available payload bytes          -+
1543          :                                                               |
1544  chunk-> +-                                                             -+
1545          |                                                               |
1546          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1548        | Size of next chunk (may or may not be in use)               | +-+
1549  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1550 
1551     And if it's free, it looks like this:
1552 
1553    chunk-> +-                                                             -+
1554            | User payload (must be in use, or we would have merged!)       |
1555            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1556          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1557          | Size of this chunk                                         0| +-+
1558    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1559          | Next pointer                                                  |
1560          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1561          | Prev pointer                                                  |
1562          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1563          |                                                               :
1564          +-      size - sizeof(struct chunk) unused bytes               -+
1565          :                                                               |
1566  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1567          | Size of this chunk                                            |
1568          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1569        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1570        | Size of next chunk (must be in use, or we would have merged)| +-+
1571  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1572        |                                                               :
1573        +- User payload                                                -+
1574        :                                                               |
1575        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1576                                                                      |0|
1577                                                                      +-+
1578   Note that since we always merge adjacent free chunks, the chunks
1579   adjacent to a free chunk must be in use.
1580 
1581   Given a pointer to a chunk (which can be derived trivially from the
1582   payload pointer) we can, in O(1) time, find out whether the adjacent
1583   chunks are free, and if so, unlink them from the lists that they
1584   are on and merge them with the current chunk.
1585 
1586   Chunks always begin on even word boundaries, so the mem portion
1587   (which is returned to the user) is also on an even word boundary, and
1588   thus at least double-word aligned.
1589 
1590   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1591   chunk size (which is always a multiple of two words), is an in-use
1592   bit for the *previous* chunk.  If that bit is *clear*, then the
1593   word before the current chunk size contains the previous chunk
1594   size, and can be used to find the front of the previous chunk.
1595   The very first chunk allocated always has this bit set, preventing
1596   access to non-existent (or non-owned) memory. If pinuse is set for
1597   any given chunk, then you CANNOT determine the size of the
1598   previous chunk, and might even get a memory addressing fault when
1599   trying to do so.
1600 
1601   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1602   the chunk size redundantly records whether the current chunk is
1603   inuse. This redundancy enables usage checks within free and realloc,
1604   and reduces indirection when freeing and consolidating chunks.
1605 
1606   Each freshly allocated chunk must have both cinuse and pinuse set.
1607   That is, each allocated chunk borders either a previously allocated
1608   and still in-use chunk, or the base of its memory arena. This is
1609   ensured by making all allocations from the `lowest' part of any
1610   found chunk.  Further, no free chunk physically borders another one,
1611   so each free chunk is known to be preceded and followed by either
1612   inuse chunks or the ends of memory.
1613 
1614   Note that the `foot' of the current chunk is actually represented
1615   as the prev_foot of the NEXT chunk. This makes it easier to
1616   deal with alignments etc but can be very confusing when trying
1617   to extend or adapt this code.
1618 
1619   The exceptions to all this are
1620 
1621      1. The special chunk `top' is the top-most available chunk (i.e.,
1622         the one bordering the end of available memory). It is treated
1623         specially.  Top is never included in any bin, is used only if
1624         no other chunk is available, and is released back to the
1625         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1626         the top chunk is treated as larger (and thus less well
1627         fitting) than any other available chunk.  The top chunk
1628         doesn't update its trailing size field since there is no next
1629         contiguous chunk that would have to index off it. However,
1630         space is still allocated for it (TOP_FOOT_SIZE) to enable
1631         separation or merging when space is extended.
1632 
1633      2. Chunks allocated via mmap, which have the lowest-order bit
1634         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1635         PINUSE_BIT in their head fields.  Because they are allocated
1636         one-by-one, each must carry its own prev_foot field, which is
1637         also used to hold the offset this chunk has within its mmapped
1638         region, which is needed to preserve alignment. Each mmapped
1639         chunk is trailed by the first two fields of a fake next-chunk
1640         for sake of usage checks.
1641 
1642 */
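
/*
  A worked example of the head encoding above, for the 4-byte size_t and
  16-byte MALLOC_ALIGNMENT used in this build: a request for 24 bytes pads
  to a 32-byte chunk, and while it and its predecessor are both in use its
  head word is 32|PINUSE_BIT|CINUSE_BIT == 0x23.  chunksize() simply masks
  the two status bits back off.  Sketch using only macros from dlmalloc.h:
*/
#if 0
static size_t example_head_word(void) {
  size_t head = (size_t)32 | PINUSE_BIT | CINUSE_BIT;  /* in use, prev in use */
  return head & ~INUSE_BITS;                           /* == 32, the chunk size */
}
#endif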
1643 
1644 #if 0 // 0 moved out to dlmalloc.h
1645 struct malloc_chunk {
1646   size_t               prev_foot;  /* Size of previous chunk (if free).  */
1647   size_t               head;       /* Size and inuse bits. */
1648   struct malloc_chunk* fd;         /* double links -- used only if free. */
1649   struct malloc_chunk* bk;
1650 };
1651 
1652 typedef struct malloc_chunk  mchunk;
1653 typedef struct malloc_chunk* mchunkptr;
1654 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1655 typedef unsigned int bindex_t;         /* Described below */
1656 typedef unsigned int binmap_t;         /* Described below */
1657 typedef unsigned int flag_t;           /* The type of various bit flag sets */
1658 
1659 /* ------------------- Chunks sizes and alignments ----------------------- */
1660 
1661 #define MCHUNK_SIZE         (sizeof(mchunk))
1662 
1663 #if FOOTERS
1664 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1665 #else /* FOOTERS */
1666 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1667 #endif /* FOOTERS */
1668 
1669 /* MMapped chunks need a second word of overhead ... */
1670 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1671 /* ... and additional padding for fake next-chunk at foot */
1672 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1673 
1674 /* The smallest size we can malloc is an aligned minimal chunk */
1675 #define MIN_CHUNK_SIZE\
1676   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1677 
1678 /* conversion from malloc headers to user pointers, and back */
1679 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1680 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1681 /* chunk associated with aligned address A */
1682 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1683 
1684 /* Bounds on request (not chunk) sizes. */
1685 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1686 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1687 
1688 /* pad request bytes into a usable size */
1689 #define pad_request(req) \
1690    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1691 
1692 /* pad request, checking for minimum (but not maximum) */
1693 #define request2size(req) \
1694   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
1695 
1696 
1697 /* ------------------ Operations on head and foot fields ----------------- */
1698 
1699 /*
1700   The head field of a chunk is or'ed with PINUSE_BIT when previous
1701   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
1702   use. If the chunk was obtained with mmap, the prev_foot field has
1703   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1704   mmapped region to the base of the chunk.
1705 */
1706 
1707 #define PINUSE_BIT          (SIZE_T_ONE)
1708 #define CINUSE_BIT          (SIZE_T_TWO)
1709 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1710 
1711 /* Head value for fenceposts */
1712 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1713 
1714 /* extraction of fields from head words */
1715 #define cinuse(p)           ((p)->head & CINUSE_BIT)
1716 #define pinuse(p)           ((p)->head & PINUSE_BIT)
1717 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1718 
1719 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1720 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1721 
1722 /* Treat space at ptr +/- offset as a chunk */
1723 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1724 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1725 
1726 /* Ptr to next or previous physical malloc_chunk. */
1727 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1728 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1729 
1730 /* extract next chunk's pinuse bit */
1731 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1732 
1733 /* Get/set size at footer */
1734 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1735 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1736 
1737 /* Set size, pinuse bit, and foot */
1738 #define set_size_and_pinuse_of_free_chunk(p, s)\
1739   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1740 
1741 /* Set size, pinuse bit, foot, and clear next pinuse */
1742 #define set_free_with_pinuse(p, s, n)\
1743   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1744 
1745 #define is_mmapped(p)\
1746   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1747 
1748 /* Get the internal overhead associated with chunk p */
1749 #define overhead_for(p)\
1750  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1751 
1752 /* Return true if malloced space is not necessarily cleared */
1753 #if MMAP_CLEARS
1754 #define calloc_must_clear(p) (!is_mmapped(p))
1755 #else /* MMAP_CLEARS */
1756 #define calloc_must_clear(p) (1)
1757 #endif /* MMAP_CLEARS */
1758 
1759 #endif  // 0 moved out to dlmalloc.h
1760 
1761 /* ---------------------- Overlaid data structures ----------------------- */
1762 
1763 /*
1764   When chunks are not in use, they are treated as nodes of either
1765   lists or trees.
1766 
1767   "Small"  chunks are stored in circular doubly-linked lists, and look
1768   like this:
1769 
1770     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1771             |             Size of previous chunk                            |
1772             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1773     `head:' |             Size of chunk, in bytes                         |P|
1774       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1775             |             Forward pointer to next chunk in list             |
1776             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1777             |             Back pointer to previous chunk in list            |
1778             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1779             |             Unused space (may be 0 bytes long)                .
1780             .                                                               .
1781             .                                                               |
1782 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1783     `foot:' |             Size of chunk, in bytes                           |
1784             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1785 
1786   Larger chunks are kept in a form of bitwise digital trees (aka
1787   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1788   free chunks greater than 256 bytes, their size doesn't impose any
1789   constraints on user chunk sizes.  Each node looks like:
1790 
1791     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1792             |             Size of previous chunk                            |
1793             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1794     `head:' |             Size of chunk, in bytes                         |P|
1795       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1796             |             Forward pointer to next chunk of same size        |
1797             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1798             |             Back pointer to previous chunk of same size       |
1799             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1800             |             Pointer to left child (child[0])                  |
1801             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1802             |             Pointer to right child (child[1])                 |
1803             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1804             |             Pointer to parent                                 |
1805             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1806             |             bin index of this chunk                           |
1807             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1808             |             Unused space                                      .
1809             .                                                               |
1810 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1811     `foot:' |             Size of chunk, in bytes                           |
1812             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1813 
1814   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1815   of the same size are arranged in a circularly-linked list, with only
1816   the oldest chunk (the next to be used, in our FIFO ordering)
1817   actually in the tree.  (Tree members are distinguished by a non-null
1818   parent pointer.)  If a chunk with the same size as an existing node
1819   is inserted, it is linked off the existing node using pointers that
1820   work in the same way as fd/bk pointers of small chunks.
1821 
1822   Each tree contains a power of 2 sized range of chunk sizes (the
1823   smallest is 0x100 <= x < 0x180), which is divided in half at each
1824   tree level, with the chunks in the smaller half of the range (0x100
1825   <= x < 0x140 for the top node) in the left subtree and the larger
1826   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1827   done by inspecting individual bits.
1828 
1829   Using these rules, each node's left subtree contains all smaller
1830   sizes than its right subtree.  However, the node at the root of each
1831   subtree has no particular ordering relationship to either.  (The
1832   dividing line between the subtree sizes is based on trie relation.)
1833   If we remove the last chunk of a given size from the interior of the
1834   tree, we need to replace it with a leaf node.  The tree ordering
1835   rules permit a node to be replaced by any leaf below it.
1836 
1837   The smallest chunk in a tree (a common operation in a best-fit
1838   allocator) can be found by walking a path to the leftmost leaf in
1839   the tree.  Unlike a usual binary tree, where we follow left child
1840   pointers until we reach a null, here we follow the right child
1841   pointer any time the left one is null, until we reach a leaf with
1842   both child pointers null. The smallest chunk in the tree will be
1843   somewhere along that path.
1844 
1845   The worst case number of steps to add, find, or remove a node is
1846   bounded by the number of bits differentiating chunks within
1847   bins. Under current bin calculations, this ranges from 6 up to 21
1848   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1849   is of course much better.
1850 */
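
/*
  Putting the small/large split above in numbers: a freed 64-byte chunk is
  "small" (64 >> SMALLBIN_SHIFT == 8, below NSMALLBINS) and is pushed onto
  the circular list at smallbin index 8, while a freed 640-byte chunk is at
  or above MIN_LARGE_SIZE (256) and is keyed into one of the range-based
  treebins instead.  Sketch using the indexing macros defined further below:
*/
#if 0
static int example_bin_for(size_t nb) {
  return is_small(nb) ? (int)small_index(nb) : -1;  /* 64 -> 8, 640 -> -1 */
}
#endif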
1851 
1852 struct malloc_tree_chunk {
1853   /* The first four fields must be compatible with malloc_chunk */
1854   size_t                    prev_foot;
1855   size_t                    head;
1856   struct malloc_tree_chunk* fd;
1857   struct malloc_tree_chunk* bk;
1858 
1859   struct malloc_tree_chunk* child[2];
1860   struct malloc_tree_chunk* parent;
1861   bindex_t                  index;
1862 };
1863 
1864 typedef struct malloc_tree_chunk  tchunk;
1865 typedef struct malloc_tree_chunk* tchunkptr;
1866 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1867 
1868 /* A little helper macro for trees */
1869 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1870 
1871 /* ----------------------------- Segments -------------------------------- */
1872 
1873 /*
1874   Each malloc space may include non-contiguous segments, held in a
1875   list headed by an embedded malloc_segment record representing the
1876   top-most space. Segments also include flags holding properties of
1877   the space. Large chunks that are directly allocated by mmap are not
1878   included in this list. They are instead independently created and
1879   destroyed without otherwise keeping track of them.
1880 
1881   Segment management mainly comes into play for spaces allocated by
1882   MMAP.  Any call to MMAP might or might not return memory that is
1883   adjacent to an existing segment.  MORECORE normally contiguously
1884   extends the current space, so this space is almost always adjacent,
1885   which is simpler and faster to deal with. (This is why MORECORE is
1886   used preferentially to MMAP when both are available -- see
1887   sys_alloc.)  When allocating using MMAP, we don't use any of the
1888   hinting mechanisms (inconsistently) supported in various
1889   implementations of unix mmap, or distinguish reserving from
1890   committing memory. Instead, we just ask for space, and exploit
1891   contiguity when we get it.  It is probably possible to do
1892   better than this on some systems, but no general scheme seems
1893   to be significantly better.
1894 
1895   Management entails a simpler variant of the consolidation scheme
1896   used for chunks to reduce fragmentation -- new adjacent memory is
1897   normally prepended or appended to an existing segment. However,
1898   there are limitations compared to chunk consolidation that mostly
1899   reflect the fact that segment processing is relatively infrequent
1900   (occurring only when getting memory from system) and that we
1901   don't expect to have huge numbers of segments:
1902 
1903   * Segments are not indexed, so traversal requires linear scans.  (It
1904     would be possible to index these, but is not worth the extra
1905     overhead and complexity for most programs on most platforms.)
1906   * New segments are only appended to old ones when holding top-most
1907     memory; if they cannot be prepended to others, they are held in
1908     different segments.
1909 
1910   Except for the top-most segment of an mstate, each segment record
1911   is kept at the tail of its segment. Segments are added by pushing
1912   segment records onto the list headed by &mstate.seg for the
1913   containing mstate.
1914 
1915   Segment flags control allocation/merge/deallocation policies:
1916   * If EXTERN_BIT set, then we did not allocate this segment,
1917     and so should not try to deallocate or merge with others.
1918     (This currently holds only for the initial segment passed
1919     into create_mspace_with_base.)
1920   * If IS_MMAPPED_BIT set, the segment may be merged with
1921     other surrounding mmapped segments and trimmed/de-allocated
1922     using munmap.
1923   * If neither bit is set, then the segment was obtained using
1924     MORECORE so can be merged with surrounding MORECORE'd segments
1925     and deallocated/trimmed using MORECORE with negative arguments.
1926 */
1927 
1928 struct malloc_segment {
1929   char*        base;             /* base address */
1930   size_t       size;             /* allocated size */
1931   struct malloc_segment* next;   /* ptr to next segment */
1932   flag_t       sflags;           /* mmap and extern flag */
1933 };
1934 
1935 #define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1936 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1937 
1938 typedef struct malloc_segment  msegment;
1939 typedef struct malloc_segment* msegmentptr;
1940 
1941 /* ---------------------------- malloc_state ----------------------------- */
1942 
1943 /*
1944    A malloc_state holds all of the bookkeeping for a space.
1945    The main fields are:
1946 
1947   Top
1948     The topmost chunk of the currently active segment. Its size is
1949     cached in topsize.  The actual size of topmost space is
1950     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1951     fenceposts and segment records if necessary when getting more
1952     space from the system.  The size at which to autotrim top is
1953     cached from mparams in trim_check, except that it is disabled if
1954     an autotrim fails.
1955 
1956   Designated victim (dv)
1957     This is the preferred chunk for servicing small requests that
1958     don't have exact fits.  It is normally the chunk split off most
1959     recently to service another small request.  Its size is cached in
1960     dvsize. The link fields of this chunk are not maintained since it
1961     is not kept in a bin.
1962 
1963   SmallBins
1964     An array of bin headers for free chunks.  These bins hold chunks
1965     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1966     chunks of all the same size, spaced 8 bytes apart.  To simplify
1967     use in double-linked lists, each bin header acts as a malloc_chunk
1968     pointing to the real first node, if it exists (else pointing to
1969     itself).  This avoids special-casing for headers.  But to avoid
1970     waste, we allocate only the fd/bk pointers of bins, and then use
1971     repositioning tricks to treat these as the fields of a chunk.
1972 
1973   TreeBins
1974     Treebins are pointers to the roots of trees holding a range of
1975     sizes. There are 2 equally spaced treebins for each power of two
1976     from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
1977     larger.
1978 
1979   Bin maps
1980     There is one bit map for small bins ("smallmap") and one for
1981     treebins ("treemap).  Each bin sets its bit when non-empty, and
1982     clears the bit when empty.  Bit operations are then used to avoid
1983     bin-by-bin searching -- nearly all "search" is done without ever
1984     looking at bins that won't be selected.  The bit maps
1985     conservatively use 32 bits per map word, even on a 64-bit system.
1986     For a good description of some of the bit-based techniques used
1987     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1988     supplement at http://hackersdelight.org/). Many of these are
1989     intended to reduce the branchiness of paths through malloc etc, as
1990     well as to reduce the number of memory locations read or written.
1991 
1992   Segments
1993     A list of segments headed by an embedded malloc_segment record
1994     representing the initial space.
1995 
1996   Address check support
1997     The least_addr field is the least address ever obtained from
1998     MORECORE or MMAP. Attempted frees and reallocs of any address less
1999     than this are trapped (unless INSECURE is defined).
2000 
2001   Magic tag
2002     A cross-check field that should always hold the same value as mparams.magic.
2003 
2004   Flags
2005     Bits recording whether to use MMAP, locks, or contiguous MORECORE
2006 
2007   Statistics
2008     Each space keeps track of current and maximum system memory
2009     obtained via MORECORE or MMAP.
2010 
2011   Locking
2012     If USE_LOCKS is defined, the "mutex" lock is acquired and released
2013     around every public call using this mspace.
2014 */
2015 
2016 /* Bin types, widths and sizes */
2017 #define NSMALLBINS        (32U)
2018 #define NTREEBINS         (32U)
2019 #define SMALLBIN_SHIFT    (3U)
2020 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2021 #define TREEBIN_SHIFT     (8U)
2022 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2023 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2024 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2025 
2026 struct malloc_state {
2027   binmap_t   smallmap;
2028   binmap_t   treemap;
2029   size_t     dvsize;
2030   size_t     topsize;
2031   char*      least_addr;
2032   mchunkptr  dv;
2033   mchunkptr  top;
2034   size_t     trim_check;
2035   size_t     magic;
2036   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2037   tbinptr    treebins[NTREEBINS];
2038   size_t     footprint;
2039   size_t     max_footprint;
2040   flag_t     mflags;
2041 #if USE_LOCKS
2042   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2043 #endif /* USE_LOCKS */
2044   msegment   seg;
2045 };
2046 
2047 typedef struct malloc_state*    mstate;
2048 
2049 /* ------------- Global malloc_state and malloc_params ------------------- */
2050 
2051 /*
2052   malloc_params holds global properties, including those that can be
2053   dynamically set using mallopt. There is a single instance, mparams,
2054   initialized in init_mparams.
2055 */
2056 
2057 struct malloc_params {
2058   size_t magic;
2059   size_t page_size;
2060   size_t granularity;
2061   size_t mmap_threshold;
2062   size_t trim_threshold;
2063   flag_t default_mflags;
2064 };
2065 
2066 static struct malloc_params mparams;
2067 
2068 /* The global malloc_state used for all non-"mspace" calls */
2069 static struct malloc_state _gm_;
2070 #define gm                 (&_gm_)
2071 #define is_global(M)       ((M) == &_gm_)
2072 #define is_initialized(M)  ((M)->top != 0)
2073 
2074 /* -------------------------- system alloc setup ------------------------- */
2075 
2076 /* Operations on mflags */
2077 
2078 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2079 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2080 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2081 
2082 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2083 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2084 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2085 
2086 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2087 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2088 
2089 #define set_lock(M,L)\
2090  ((M)->mflags = (L)?\
2091   ((M)->mflags | USE_LOCK_BIT) :\
2092   ((M)->mflags & ~USE_LOCK_BIT))
2093 
2094 /* page-align a size */
2095 #define page_align(S)\
2096  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2097 
2098 /* granularity-align a size */
2099 #define granularity_align(S)\
2100   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2101 
2102 #define is_page_aligned(S)\
2103    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2104 #define is_granularity_aligned(S)\
2105    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2106 
2107 /*  True if segment S holds address A */
2108 #define segment_holds(S, A)\
2109   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2110 
2111 /* Return segment holding given address */
2112 static msegmentptr segment_holding(mstate m, char* addr) {
2113   msegmentptr sp = &m->seg;
2114   for (;;) {
2115     if (addr >= sp->base && addr < sp->base + sp->size)
2116       return sp;
2117     if ((sp = sp->next) == 0)
2118       return 0;
2119   }
2120 }
2121 
2122 /* Return true if segment contains a segment link */
2123 static int has_segment_link(mstate m, msegmentptr ss) {
2124   msegmentptr sp = &m->seg;
2125   for (;;) {
2126     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2127       return 1;
2128     if ((sp = sp->next) == 0)
2129       return 0;
2130   }
2131 }
2132 
2133 #ifndef MORECORE_CANNOT_TRIM
2134 #define should_trim(M,s)  ((s) > (M)->trim_check)
2135 #else  /* MORECORE_CANNOT_TRIM */
2136 #define should_trim(M,s)  (0)
2137 #endif /* MORECORE_CANNOT_TRIM */
2138 
2139 /*
2140   TOP_FOOT_SIZE is padding at the end of a segment, including space
2141   that may be needed to place segment records and fenceposts when new
2142   noncontiguous segments are added.
2143 */
2144 #define TOP_FOOT_SIZE\
2145   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
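
/*
  Assuming the 32-bit target this port is built for (4-byte size_t and
  pointers), with the 16-byte MALLOC_ALIGNMENT and FOOTERS==0 configured
  above, this works out to align_offset(chunk2mem(0)) == 8,
  pad_request(sizeof(struct malloc_segment)) == 32 and MIN_CHUNK_SIZE == 16,
  i.e. TOP_FOOT_SIZE == 56 bytes reserved at the top of every segment.
*/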
2146 
2147 
2148 /* -------------------------------  Hooks -------------------------------- */
2149 
2150 /*
2151   PREACTION should be defined to return 0 on success, and nonzero on
2152   failure. If you are not using locking, you can redefine these to do
2153   anything you like.
2154 */
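
/*
  The public allocation entry points (dlmalloc, dlfree, dlrealloc, the
  mspace_* variants) bracket their work with this pair, roughly:

      if (!PREACTION(gm)) {
        ... allocate / free ...
        POSTACTION(gm);
      }

  so when USE_LOCKS is 0, PREACTION is the constant 0 and POSTACTION is
  empty, and the compiler drops the test entirely.
*/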
2155 
2156 #if USE_LOCKS
2157 
2158 /* Ensure locks are initialized */
2159 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2160 
2161 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2162 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2163 #else /* USE_LOCKS */
2164 
2165 #ifndef PREACTION
2166 #define PREACTION(M) (0)
2167 #endif  /* PREACTION */
2168 
2169 #ifndef POSTACTION
2170 #define POSTACTION(M)
2171 #endif  /* POSTACTION */
2172 
2173 #endif /* USE_LOCKS */
2174 
2175 /*
2176   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2177   USAGE_ERROR_ACTION is triggered on detected bad frees and
2178   reallocs. The argument p is an address that might have triggered the
2179   fault. It is ignored by the two predefined actions, but might be
2180   useful in custom actions that try to help diagnose errors.
2181 */
2182 
2183 #if PROCEED_ON_ERROR
2184 
2185 /* A count of the number of corruption errors causing resets */
2186 int malloc_corruption_error_count;
2187 
2188 /* default corruption action */
2189 static void reset_on_error(mstate m);
2190 
2191 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2192 #define USAGE_ERROR_ACTION(m, p)
2193 
2194 #else /* PROCEED_ON_ERROR */
2195 
2196 #ifndef CORRUPTION_ERROR_ACTION
2197 #define CORRUPTION_ERROR_ACTION(m) ABORT
2198 #endif /* CORRUPTION_ERROR_ACTION */
2199 
2200 #ifndef USAGE_ERROR_ACTION
2201 #define USAGE_ERROR_ACTION(m,p) ABORT
2202 #endif /* USAGE_ERROR_ACTION */
2203 
2204 #endif /* PROCEED_ON_ERROR */
2205 
2206 /* -------------------------- Debugging setup ---------------------------- */
2207 
2208 #if ! DEBUG
2209 
2210 #define check_free_chunk(M,P)
2211 #define check_inuse_chunk(M,P)
2212 #define check_malloced_chunk(M,P,N)
2213 #define check_mmapped_chunk(M,P)
2214 #define check_malloc_state(M)
2215 #define check_top_chunk(M,P)
2216 
2217 #else /* DEBUG */
2218 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2219 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2220 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2221 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2222 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2223 #define check_malloc_state(M)       do_check_malloc_state(M)
2224 
2225 static void   do_check_any_chunk(mstate m, mchunkptr p);
2226 static void   do_check_top_chunk(mstate m, mchunkptr p);
2227 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2228 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2229 static void   do_check_free_chunk(mstate m, mchunkptr p);
2230 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2231 static void   do_check_tree(mstate m, tchunkptr t);
2232 static void   do_check_treebin(mstate m, bindex_t i);
2233 static void   do_check_smallbin(mstate m, bindex_t i);
2234 static void   do_check_malloc_state(mstate m);
2235 static int    bin_find(mstate m, mchunkptr x);
2236 static size_t traverse_and_check(mstate m);
2237 #endif /* DEBUG */
2238 
2239 /* ---------------------------- Indexing Bins ---------------------------- */
2240 
2241 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2242 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
2243 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2244 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2245 
2246 /* addressing by index. See above about smallbin repositioning */
2247 #define smallbin_at(M, i)   ((sbinptr)((unsigned long)&((M)->smallbins[(i)<<1])))
2248 #define treebin_at(M,i)     (&((M)->treebins[i]))
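
/*
  smallbin_at() depends on the repositioning trick mentioned in the
  malloc_state comment: &smallbins[i<<1], viewed as a malloc_chunk, has its
  fd/bk fields overlaying smallbins[(i<<1)+2] and smallbins[(i<<1)+3].
  Each bin header therefore really owns just two pointers; its prev_foot
  and head "fields" harmlessly alias the previous bin's fd/bk, which is why
  the array is declared with (NSMALLBINS+1)*2 entries.
*/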
2249 
2250 /* assign tree index for size S to variable I */
2251 #if defined(__GNUC__) && defined(i386)
2252 #define compute_tree_index(S, I)\
2253 {\
2254   size_t X = S >> TREEBIN_SHIFT;\
2255   if (X == 0)\
2256     I = 0;\
2257   else if (X > 0xFFFF)\
2258     I = NTREEBINS-1;\
2259   else {\
2260     unsigned int K;\
2261     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
2262     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2263   }\
2264 }
2265 #else /* GNUC */
2266 #define compute_tree_index(S, I)\
2267 {\
2268   size_t X = S >> TREEBIN_SHIFT;\
2269   if (X == 0)\
2270     I = 0;\
2271   else if (X > 0xFFFF)\
2272     I = NTREEBINS-1;\
2273   else {\
2274     unsigned int Y = (unsigned int)X;\
2275     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2276     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2277     N += K;\
2278     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2279     K = 14 - N + ((Y <<= K) >> 15);\
2280     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2281   }\
2282 }
2283 #endif /* GNUC */
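
/*
  Worked example for compute_tree_index: for S == 640, X = 640 >> 8 == 2,
  whose highest set bit is bit 1 (K == 1); the next lower size bit of S,
  bit K+TREEBIN_SHIFT-1 == bit 8, is 0, so I = (K << 1) + 0 == 2.  Treebin
  2 holds sizes in [512, 768), which indeed contains 640; bin 3 would hold
  [768, 1024).  Sketch:
*/
#if 0
static bindex_t example_tree_index(void) {
  bindex_t i;
  compute_tree_index(640, i);   /* yields i == 2 */
  return i;
}
#endif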
2284 
2285 /* Bit representing maximum resolved size in a treebin at i */
2286 #define bit_for_tree_index(i) \
2287    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2288 
2289 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2290 #define leftshift_for_tree_index(i) \
2291    ((i == NTREEBINS-1)? 0 : \
2292     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2293 
2294 /* The size of the smallest chunk held in bin with index i */
2295 #define minsize_for_tree_index(i) \
2296    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2297    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
2298 
2299 
2300 /* ------------------------ Operations on bin maps ----------------------- */
2301 
2302 /* bit corresponding to given index */
2303 #define idx2bit(i)              ((binmap_t)(1) << (i))
2304 
2305 /* Mark/Clear bits with given index */
2306 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2307 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2308 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2309 
2310 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2311 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2312 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2313 
2314 /* index corresponding to given bit */
2315 
2316 #if defined(__GNUC__) && defined(i386)
2317 #define compute_bit2idx(X, I)\
2318 {\
2319   unsigned int J;\
2320   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2321   I = (bindex_t)J;\
2322 }
2323 
2324 #else /* GNUC */
2325 #if  USE_BUILTIN_FFS
2326 #define compute_bit2idx(X, I) I = ffs(X)-1
2327 
2328 #else /* USE_BUILTIN_FFS */
2329 #define compute_bit2idx(X, I)\
2330 {\
2331   unsigned int Y = X - 1;\
2332   unsigned int K = Y >> (16-4) & 16;\
2333   unsigned int N = K;        Y >>= K;\
2334   N += K = Y >> (8-3) &  8;  Y >>= K;\
2335   N += K = Y >> (4-2) &  4;  Y >>= K;\
2336   N += K = Y >> (2-1) &  2;  Y >>= K;\
2337   N += K = Y >> (1-0) &  1;  Y >>= K;\
2338   I = (bindex_t)(N + Y);\
2339 }
2340 #endif /* USE_BUILTIN_FFS */
2341 #endif /* GNUC */
2342 
2343 /* isolate the least set bit of a bitmap */
2344 #define least_bit(x)         ((x) & -(x))
2345 
2346 /* mask with all bits to left of least bit of x on */
2347 #define left_bits(x)         ((x<<1) | -(x<<1))
2348 
2349 /* mask with all bits to left of or equal to least bit of x on */
2350 #define same_or_left_bits(x) ((x) | -(x))
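
/*
  Example of the bitmap operations above: if smallmap == 0x28 (bins 3 and 5
  non-empty), least_bit(0x28) == 0x08 isolates the lowest non-empty bin,
  compute_bit2idx maps that bit back to index 3, and left_bits(0x08) ==
  0xFFFFFFF0 masks in every bin at index 4 and above -- the usual way the
  allocator moves on to the next larger non-empty bin.  Sketch:
*/
#if 0
static bindex_t example_first_nonempty_bin(binmap_t map) {
  binmap_t b = least_bit(map);  /* 0x28 -> 0x08 */
  bindex_t i;
  compute_bit2idx(b, i);        /* 0x08 -> 3  */
  return i;
}
#endif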
2351 
2352 
2353 /* ----------------------- Runtime Check Support ------------------------- */
2354 
2355 /*
2356   For security, the main invariant is that malloc/free/etc never
2357   writes to a static address other than malloc_state, unless static
2358   malloc_state itself has been corrupted, which cannot occur via
2359   malloc (because of these checks). In essence this means that we
2360   believe all pointers, sizes, maps etc held in malloc_state, but
2361   check all of those linked or offsetted from other embedded data
2362   structures.  These checks are interspersed with main code in a way
2363   that tends to minimize their run-time cost.
2364 
2365   When FOOTERS is defined, in addition to range checking, we also
2366   verify footer fields of inuse chunks, which can be used to guarantee
2367   that the mstate controlling malloc/free is intact.  This is a
2368   streamlined version of the approach described by William Robertson
2369   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2370   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2371   of an inuse chunk holds the xor of its mstate and a random seed,
2372   that is checked upon calls to free() and realloc().  This is
2373   (probabilistically) unguessable from outside the program, but can be
2374   computed by any code successfully malloc'ing any chunk, so does not
2375   itself provide protection against code that has already broken
2376   security through some other means.  Unlike Robertson et al, we
2377   always dynamically check addresses of all offset chunks (previous,
2378   next, etc). This turns out to be cheaper than relying on hashes.
2379 */
2380 
2381 #if !INSECURE
2382 /* Check if address a is at least as high as any from MORECORE or MMAP */
2383 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2384 /* Check if address of next chunk n is higher than base chunk p */
2385 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
2386 /* Check if p has its cinuse bit on */
2387 #define ok_cinuse(p)     cinuse(p)
2388 /* Check if p has its pinuse bit on */
2389 #define ok_pinuse(p)     pinuse(p)
2390 
2391 #else /* !INSECURE */
2392 #define ok_address(M, a) (1)
2393 #define ok_next(b, n)    (1)
2394 #define ok_cinuse(p)     (1)
2395 #define ok_pinuse(p)     (1)
2396 #endif /* !INSECURE */
2397 
2398 #if (FOOTERS && !INSECURE)
2399 /* Check if (alleged) mstate m has expected magic field */
2400 #define ok_magic(M)      ((M)->magic == mparams.magic)
2401 #else  /* (FOOTERS && !INSECURE) */
2402 #define ok_magic(M)      (1)
2403 #endif /* (FOOTERS && !INSECURE) */
2404 
2405 
2406 /* In gcc, use __builtin_expect to minimize impact of checks */
2407 #if !INSECURE
2408 #if defined(__GNUC__) && __GNUC__ >= 3
2409 #define RTCHECK(e)  __builtin_expect(e, 1)
2410 #else /* GNUC */
2411 #define RTCHECK(e)  (e)
2412 #endif /* GNUC */
2413 #else /* !INSECURE */
2414 #define RTCHECK(e)  (1)
2415 #endif /* !INSECURE */
2416 
2417 /* macros to set up inuse chunks with or without footers */
2418 
2419 #if !FOOTERS
2420 
2421 #define mark_inuse_foot(M,p,s)
2422 
2423 /* Set cinuse bit and pinuse bit of next chunk */
2424 #define set_inuse(M,p,s)\
2425   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2426   ((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT)
2427 
2428 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2429 #define set_inuse_and_pinuse(M,p,s)\
2430   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2431   ((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT)
2432 
2433 /* Set size, cinuse and pinuse bit of this chunk */
2434 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2435   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2436 
2437 #else /* FOOTERS */
2438 
2439 /* Set foot of inuse chunk to be xor of mstate and seed */
2440 #define mark_inuse_foot(M,p,s)\
2441   (((mchunkptr)((unsigned long)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2442 
2443 #define get_mstate_for(p)\
2444   ((mstate)(((mchunkptr)((unsigned long)(p) +\
2445     (chunksize(p))))->prev_foot ^ mparams.magic))
2446 
2447 #define set_inuse(M,p,s)\
2448   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2449   (((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT), \
2450   mark_inuse_foot(M,p,s))
2451 
2452 #define set_inuse_and_pinuse(M,p,s)\
2453   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2454   (((mchunkptr)(((unsigned long)(p)) + (s)))->head |= PINUSE_BIT),\
2455  mark_inuse_foot(M,p,s))
2456 
2457 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2458   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2459   mark_inuse_foot(M, p, s))
2460 
2461 #endif /* !FOOTERS */
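
/*
  In FOOTERS builds the check is just an xor round trip: mark_inuse_foot
  stores ((size_t)m ^ mparams.magic) in the next chunk's prev_foot, and
  get_mstate_for xors the same magic back out, so a smashed or foreign
  footer produces a pointer whose ->magic no longer matches mparams.magic
  and ok_magic() rejects it.  Sketch of the identity (this build has
  FOOTERS == 0, so the no-footer macro versions above apply instead):
*/
#if 0
static int example_footer_roundtrip(mstate m, mchunkptr p) {
  size_t s = chunksize(p);
  mark_inuse_foot(m, p, s);        /* foot = (size_t)m ^ mparams.magic */
  return get_mstate_for(p) == m;   /* (m ^ magic) ^ magic == m         */
}
#endif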
2462 
2463 /* ---------------------------- setting mparams -------------------------- */
2464 
2465 /* Initialize mparams */
2466 static int init_mparams(void) {
2467   if (mparams.page_size == 0) {
2468     size_t s;
2469 
2470     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2471     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2472 #if MORECORE_CONTIGUOUS
2473     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2474 #else  /* MORECORE_CONTIGUOUS */
2475     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2476 #endif /* MORECORE_CONTIGUOUS */
2477 
2478 #if (FOOTERS && !INSECURE)
2479     {
2480 #if USE_DEV_RANDOM
2481       int fd;
2482       unsigned char buf[sizeof(size_t)];
2483       /* Try to use /dev/urandom, else fall back on using time */
2484       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2485           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2486         s = *((size_t *) buf);
2487         close(fd);
2488       }
2489       else
2490 #endif /* USE_DEV_RANDOM */
2491         s = (size_t)(time(0) ^ (size_t)0x55555555U);
2492 
2493       s |= (size_t)8U;    /* ensure nonzero */
2494       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2495 
2496     }
2497 #else /* (FOOTERS && !INSECURE) */
2498     s = (size_t)0x58585858U;
2499 #endif /* (FOOTERS && !INSECURE) */
2500     {
2501       int ret_value = ACQUIRE_MAGIC_INIT_LOCK();
2502       if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
2503     }
2504     if (mparams.magic == 0) {
2505       mparams.magic = s;
2506       /* Set up lock for main malloc area */
2507       {
2508         int ret_value = INITIAL_LOCK(&gm->mutex);
2509         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
2510       }
2511       gm->mflags = mparams.default_mflags;
2512     }
2513     RELEASE_MAGIC_INIT_LOCK();
2514 
2515 #ifndef WIN32
2516     mparams.page_size = 4096;    //malloc_getpagesize;
2517     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2518                            DEFAULT_GRANULARITY : mparams.page_size);
2519 #else /* WIN32 */
2520     {
2521       SYSTEM_INFO system_info;
2522       GetSystemInfo(&system_info);
2523       mparams.page_size = system_info.dwPageSize;
2524       mparams.granularity = system_info.dwAllocationGranularity;
2525     }
2526 #endif /* WIN32 */
2527 
2528     /* Sanity-check configuration:
2529        size_t must be unsigned and as wide as pointer type.
2530        ints must be at least 4 bytes.
2531        alignment must be at least 8.
2532        Alignment, min chunk size, and page size must all be powers of 2.
2533     */
2534     if ((sizeof(size_t) != sizeof(char*)) ||
2535         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2536         (sizeof(int) < 4)  ||
2537         (MALLOC_ALIGNMENT < (size_t)8U) ||
2538         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2539         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2540         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2541         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2542     {
2543       // @FIXME: Richard removed this to avoid linking to the standard C library (libc.a)
2544       //ABORT;
2545     }
2546   }
2547   return 0;
2548 }
2549 
2550 /* support for mallopt */
2551 static int change_mparam(int param_number, int value) {
2552   size_t val = (size_t)value;
2553   init_mparams();
2554   switch(param_number) {
2555   case M_TRIM_THRESHOLD:
2556     mparams.trim_threshold = val;
2557     return 1;
2558   case M_GRANULARITY:
2559     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2560       mparams.granularity = val;
2561       return 1;
2562     }
2563     else
2564       return 0;
2565   case M_MMAP_THRESHOLD:
2566     mparams.mmap_threshold = val;
2567     return 1;
2568   default:
2569     return 0;
2570   }
2571 }
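
/*
  Illustrative sketch, not part of the allocator: change_mparam() is normally
  reached through the public mallopt wrapper (dlmallopt() with the dl prefix,
  assumed to be defined with the other public routines below).  For example,
  raising the allocation granularity to 64KB succeeds only if the value is a
  power of two and at least the page size:

      int ok = dlmallopt(M_GRANULARITY, 64 * 1024);   // 1 on success, 0 if rejected
*/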
2572 
2573 #if DEBUG
2574 /* ------------------------- Debugging Support --------------------------- */
2575 
2576 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
2577 static void do_check_any_chunk(mstate m, mchunkptr p) {
2578   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2579   assert(ok_address(m, p));
2580 }
2581 
2582 /* Check properties of top chunk */
2583 static void do_check_top_chunk(mstate m, mchunkptr p) {
2584   msegmentptr sp = segment_holding(m, (char*)p);
2585   size_t  sz = chunksize(p);
2586   assert(sp != 0);
2587   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2588   assert(ok_address(m, p));
2589   assert(sz == m->topsize);
2590   assert(sz > 0);
2591   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2592   assert(pinuse(p));
2593   assert(!next_pinuse(p));
2594 }
2595 
2596 /* Check properties of (inuse) mmapped chunks */
2597 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2598   size_t  sz = chunksize(p);
2599   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2600   assert(is_mmapped(p));
2601   assert(use_mmap(m));
2602   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2603   assert(ok_address(m, p));
2604   assert(!is_small(sz));
2605   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2606   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2607   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2608 }
2609 
2610 /* Check properties of inuse chunks */
2611 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2612   do_check_any_chunk(m, p);
2613   assert(cinuse(p));
2614   assert(next_pinuse(p));
2615   /* If not pinuse and not mmapped, previous chunk has OK offset */
2616   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2617   if (is_mmapped(p))
2618     do_check_mmapped_chunk(m, p);
2619 }
2620 
2621 /* Check properties of free chunks */
2622 static void do_check_free_chunk(mstate m, mchunkptr p) {
2623   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2624   mchunkptr next = chunk_plus_offset(p, sz);
2625   do_check_any_chunk(m, p);
2626   assert(!cinuse(p));
2627   assert(!next_pinuse(p));
2628   assert (!is_mmapped(p));
2629   if (p != m->dv && p != m->top) {
2630     if (sz >= MIN_CHUNK_SIZE) {
2631       assert((sz & CHUNK_ALIGN_MASK) == 0);
2632       assert(is_aligned(chunk2mem(p)));
2633       assert(next->prev_foot == sz);
2634       assert(pinuse(p));
2635       assert (next == m->top || cinuse(next));
2636       assert(p->fd->bk == p);
2637       assert(p->bk->fd == p);
2638     }
2639     else  /* markers are always of size SIZE_T_SIZE */
2640       assert(sz == SIZE_T_SIZE);
2641   }
2642 }
2643 
2644 /* Check properties of malloced chunks at the point they are malloced */
2645 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2646   if (mem != 0) {
2647     mchunkptr p = mem2chunk(mem);
2648     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2649     do_check_inuse_chunk(m, p);
2650     assert((sz & CHUNK_ALIGN_MASK) == 0);
2651     assert(sz >= MIN_CHUNK_SIZE);
2652     assert(sz >= s);
2653     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2654     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2655   }
2656 }
2657 
2658 /* Check a tree and its subtrees.  */
2659 static void do_check_tree(mstate m, tchunkptr t) {
2660   tchunkptr head = 0;
2661   tchunkptr u = t;
2662   bindex_t tindex = t->index;
2663   size_t tsize = chunksize(t);
2664   bindex_t idx;
2665   compute_tree_index(tsize, idx);
2666   assert(tindex == idx);
2667   assert(tsize >= MIN_LARGE_SIZE);
2668   assert(tsize >= minsize_for_tree_index(idx));
2669   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2670 
2671   do { /* traverse through chain of same-sized nodes */
2672     do_check_any_chunk(m, ((mchunkptr)u));
2673     assert(u->index == tindex);
2674     assert(chunksize(u) == tsize);
2675     assert(!cinuse(u));
2676     assert(!next_pinuse(u));
2677     assert(u->fd->bk == u);
2678     assert(u->bk->fd == u);
2679     if (u->parent == 0) {
2680       assert(u->child[0] == 0);
2681       assert(u->child[1] == 0);
2682     }
2683     else {
2684       assert(head == 0); /* only one node on chain has parent */
2685       head = u;
2686       assert(u->parent != u);
2687       assert (u->parent->child[0] == u ||
2688               u->parent->child[1] == u ||
2689               *((tbinptr*)(u->parent)) == u);
2690       if (u->child[0] != 0) {
2691         assert(u->child[0]->parent == u);
2692         assert(u->child[0] != u);
2693         do_check_tree(m, u->child[0]);
2694       }
2695       if (u->child[1] != 0) {
2696         assert(u->child[1]->parent == u);
2697         assert(u->child[1] != u);
2698         do_check_tree(m, u->child[1]);
2699       }
2700       if (u->child[0] != 0 && u->child[1] != 0) {
2701         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2702       }
2703     }
2704     u = u->fd;
2705   } while (u != t);
2706   assert(head != 0);
2707 }
2708 
2709 /*  Check all the chunks in a treebin.  */
2710 static void do_check_treebin(mstate m, bindex_t i) {
2711   tbinptr* tb = treebin_at(m, i);
2712   tchunkptr t = *tb;
2713   int empty = (m->treemap & (1U << i)) == 0;
2714   if (t == 0)
2715     assert(empty);
2716   if (!empty)
2717     do_check_tree(m, t);
2718 }
2719 
2720 /*  Check all the chunks in a smallbin.  */
2721 static void do_check_smallbin(mstate m, bindex_t i) {
2722   sbinptr b = smallbin_at(m, i);
2723   mchunkptr p = b->bk;
2724   unsigned int empty = (m->smallmap & (1U << i)) == 0;
2725   if (p == b)
2726     assert(empty);
2727   if (!empty) {
2728     for (; p != b; p = p->bk) {
2729       size_t size = chunksize(p);
2730       mchunkptr q;
2731       /* each chunk claims to be free */
2732       do_check_free_chunk(m, p);
2733       /* chunk belongs in bin */
2734       assert(small_index(size) == i);
2735       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2736       /* chunk is followed by an inuse chunk */
2737       q = next_chunk(p);
2738       if (q->head != FENCEPOST_HEAD)
2739         do_check_inuse_chunk(m, q);
2740     }
2741   }
2742 }
2743 
2744 /* Find x in a bin. Used in other check functions. */
2745 static int bin_find(mstate m, mchunkptr x) {
2746   size_t size = chunksize(x);
2747   if (is_small(size)) {
2748     bindex_t sidx = small_index(size);
2749     sbinptr b = smallbin_at(m, sidx);
2750     if (smallmap_is_marked(m, sidx)) {
2751       mchunkptr p = b;
2752       do {
2753         if (p == x)
2754           return 1;
2755       } while ((p = p->fd) != b);
2756     }
2757   }
2758   else {
2759     bindex_t tidx;
2760     compute_tree_index(size, tidx);
2761     if (treemap_is_marked(m, tidx)) {
2762       tchunkptr t = *treebin_at(m, tidx);
2763       size_t sizebits = size << leftshift_for_tree_index(tidx);
2764       while (t != 0 && chunksize(t) != size) {
2765         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2766         sizebits <<= 1;
2767       }
2768       if (t != 0) {
2769         tchunkptr u = t;
2770         do {
2771           if (u == (tchunkptr)x)
2772             return 1;
2773         } while ((u = u->fd) != t);
2774       }
2775     }
2776   }
2777   return 0;
2778 }
2779 
2780 /* Traverse each chunk and check it; return total */
2781 static size_t traverse_and_check(mstate m) {
2782   size_t sum = 0;
2783   if (is_initialized(m)) {
2784     msegmentptr s = &m->seg;
2785     sum += m->topsize + TOP_FOOT_SIZE;
2786     while (s != 0) {
2787       mchunkptr q = align_as_chunk(s->base);
2788       mchunkptr lastq = 0;
2789       assert(pinuse(q));
2790       while (segment_holds(s, q) &&
2791              q != m->top && q->head != FENCEPOST_HEAD) {
2792         sum += chunksize(q);
2793         if (cinuse(q)) {
2794           assert(!bin_find(m, q));
2795           do_check_inuse_chunk(m, q);
2796         }
2797         else {
2798           assert(q == m->dv || bin_find(m, q));
2799           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2800           do_check_free_chunk(m, q);
2801         }
2802         lastq = q;
2803         q = next_chunk(q);
2804       }
2805       s = s->next;
2806     }
2807   }
2808   return sum;
2809 }
2810 
2811 /* Check all properties of malloc_state. */
2812 static void do_check_malloc_state(mstate m) {
2813   bindex_t i;
2814   size_t total;
2815   /* check bins */
2816   for (i = 0; i < NSMALLBINS; ++i)
2817     do_check_smallbin(m, i);
2818   for (i = 0; i < NTREEBINS; ++i)
2819     do_check_treebin(m, i);
2820 
2821   if (m->dvsize != 0) { /* check dv chunk */
2822     do_check_any_chunk(m, m->dv);
2823     assert(m->dvsize == chunksize(m->dv));
2824     assert(m->dvsize >= MIN_CHUNK_SIZE);
2825     assert(bin_find(m, m->dv) == 0);
2826   }
2827 
2828   if (m->top != 0) {   /* check top chunk */
2829     do_check_top_chunk(m, m->top);
2830     assert(m->topsize == chunksize(m->top));
2831     assert(m->topsize > 0);
2832     assert(bin_find(m, m->top) == 0);
2833   }
2834 
2835   total = traverse_and_check(m);
2836   assert(total <= m->footprint);
2837   assert(m->footprint <= m->max_footprint);
2838 }
2839 #endif /* DEBUG */
2840 
2841 /* ----------------------------- statistics ------------------------------ */
2842 
2843 #if !NO_MALLINFO
2844 static struct mallinfo internal_mallinfo(mstate m, size_t *plargest_free) {
2845   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2846   if (!PREACTION(m)) {
2847     check_malloc_state(m);
2848     if (is_initialized(m)) {
2849       size_t nfree = SIZE_T_ONE; /* top always free */
2850       size_t mfree = m->topsize + TOP_FOOT_SIZE;
2851       size_t sum = mfree;
2852       *plargest_free = mfree;
2853       msegmentptr s = &m->seg;
2854       while (s != 0) {
2855         mchunkptr q = align_as_chunk(s->base);
2856         while (segment_holds(s, q) &&
2857                q != m->top && q->head != FENCEPOST_HEAD) {
2858           size_t sz = chunksize(q);
2859           sum += sz;
2860           if (!cinuse(q)) {
2861             mfree += sz;
2862             if (*plargest_free < sz)
2863                 *plargest_free = sz;
2864             ++nfree;
2865           }
2866           q = next_chunk(q);
2867         }
2868         s = s->next;
2869       }
2870 
2871       nm.arena    = sum;
2872       nm.ordblks  = nfree;
2873       nm.hblkhd   = m->footprint - sum;
2874       nm.usmblks  = m->max_footprint;
2875       nm.uordblks = m->footprint - mfree;
2876       nm.fordblks = mfree;
2877       nm.keepcost = m->topsize;
2878     }
2879 
2880     POSTACTION(m);
2881   }
2882   return nm;
2883 }
2884 #endif /* !NO_MALLINFO */
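
/*
  Illustrative sketch, not part of the allocator: reading the statistics that
  internal_mallinfo() fills in above.  The public wrapper and its exact
  signature are assumptions here; what is certain from the code above is the
  meaning of the fields and of *plargest_free (largest single free chunk,
  top included).

      size_t largest = 0;
      struct mallinfo mi = dlmallinfo(&largest);   // hypothetical call shape
      // mi.arena    : total space spanned by the walked segments
      // mi.fordblks : total free space;  mi.uordblks : space in use
      // mi.keepcost : top-most space releasable by trimming (m->topsize)
*/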
2885 
2886 static void internal_malloc_stats(mstate m) {
2887   if (!PREACTION(m)) {
2888     size_t maxfp = 0;
2889     size_t fp = 0;
2890     size_t used = 0;
2891     check_malloc_state(m);
2892     if (is_initialized(m)) {
2893       msegmentptr s = &m->seg;
2894       maxfp = m->max_footprint;
2895       fp = m->footprint;
2896       used = fp - (m->topsize + TOP_FOOT_SIZE);
2897 
2898       while (s != 0) {
2899         mchunkptr q = align_as_chunk(s->base);
2900         while (segment_holds(s, q) &&
2901                q != m->top && q->head != FENCEPOST_HEAD) {
2902           if (!cinuse(q))
2903             used -= chunksize(q);
2904           q = next_chunk(q);
2905         }
2906         s = s->next;
2907       }
2908     }
2909 
2910    // fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2911    // fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
2912    // fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
2913 
2914     POSTACTION(m);
2915   }
2916 }
2917 
2918 /* ----------------------- Operations on smallbins ----------------------- */
2919 
2920 /*
2921   Various forms of linking and unlinking are defined as macros.  Even
2922   the ones for trees, which are very long but have very short typical
2923   paths.  This is ugly but reduces reliance on inlining support of
2924   compilers.
2925 */
2926 
2927 /* Link a free chunk into a smallbin  */
2928 #define insert_small_chunk(M, P, S) {\
2929   bindex_t I  = small_index(S);\
2930   mchunkptr B = smallbin_at(M, I);\
2931   mchunkptr F = B;\
2932   assert(S >= MIN_CHUNK_SIZE);\
2933   if (!smallmap_is_marked(M, I))\
2934     mark_smallmap(M, I);\
2935   else if (RTCHECK(ok_address(M, B->fd)))\
2936     F = B->fd;\
2937   else {\
2938     CORRUPTION_ERROR_ACTION(M);\
2939   }\
2940   B->fd = P;\
2941   F->bk = P;\
2942   P->fd = F;\
2943   P->bk = B;\
2944 }
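
/*
  Illustrative sketch of insert_small_chunk above (not compiled): each
  smallbin is a circular doubly-linked list whose header B is the bin itself.
  Inserting P at the front of a bin whose first chunk is F:

      before:  B <-> F <-> ... <-> B          (circular)
      after:   B <-> P <-> F <-> ... <-> B

  which is exactly the last four assignments of the macro
  (B->fd = P, F->bk = P, P->fd = F, P->bk = B); an empty bin is the special
  case F == B.
*/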
2945 
2946 /* Unlink a chunk from a smallbin  */
2947 #define unlink_small_chunk(M, P, S) {\
2948   mchunkptr F = P->fd;\
2949   mchunkptr B = P->bk;\
2950   bindex_t I = small_index(S);\
2951   assert(P != B);\
2952   assert(P != F);\
2953   assert(chunksize(P) == small_index2size(I));\
2954   if (F == B)\
2955     clear_smallmap(M, I);\
2956   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2957                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2958     F->bk = B;\
2959     B->fd = F;\
2960   }\
2961   else {\
2962     CORRUPTION_ERROR_ACTION(M);\
2963   }\
2964 }
2965 
2966 /* Unlink the first chunk from a smallbin */
2967 #define unlink_first_small_chunk(M, B, P, I) {\
2968   mchunkptr F = P->fd;\
2969   assert(P != B);\
2970   assert(P != F);\
2971   assert(chunksize(P) == small_index2size(I));\
2972   if (B == F)\
2973     clear_smallmap(M, I);\
2974   else if (RTCHECK(ok_address(M, F))) {\
2975     B->fd = F;\
2976     F->bk = B;\
2977   }\
2978   else {\
2979     CORRUPTION_ERROR_ACTION(M);\
2980   }\
2981 }
2982 
2983 /* Replace dv node, binning the old one */
2984 /* Used only when dvsize known to be small */
2985 #define replace_dv(M, P, S) {\
2986   size_t DVS = M->dvsize;\
2987   if (DVS != 0) {\
2988     mchunkptr DV = M->dv;\
2989     assert(is_small(DVS));\
2990     insert_small_chunk(M, DV, DVS);\
2991   }\
2992   M->dvsize = S;\
2993   M->dv = P;\
2994 }
2995 
2996 /* ------------------------- Operations on trees ------------------------- */
2997 
2998 /* Insert chunk into tree */
2999 #define insert_large_chunk(M, X, S) {\
3000   tbinptr* H;\
3001   bindex_t I;\
3002   compute_tree_index(S, I);\
3003   H = treebin_at(M, I);\
3004   X->index = I;\
3005   X->child[0] = X->child[1] = 0;\
3006   if (!treemap_is_marked(M, I)) {\
3007     mark_treemap(M, I);\
3008     *H = X;\
3009     X->parent = (tchunkptr)H;\
3010     X->fd = X->bk = X;\
3011   }\
3012   else {\
3013     tchunkptr T = *H;\
3014     size_t K = S << leftshift_for_tree_index(I);\
3015     for (;;) {\
3016       if (chunksize(T) != S) {\
3017         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3018         K <<= 1;\
3019         if (*C != 0)\
3020           T = *C;\
3021         else if (RTCHECK(ok_address(M, C))) {\
3022           *C = X;\
3023           X->parent = T;\
3024           X->fd = X->bk = X;\
3025           break;\
3026         }\
3027         else {\
3028           CORRUPTION_ERROR_ACTION(M);\
3029           break;\
3030         }\
3031       }\
3032       else {\
3033         tchunkptr F = T->fd;\
3034         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3035           T->fd = F->bk = X;\
3036           X->fd = F;\
3037           X->bk = T;\
3038           X->parent = 0;\
3039           break;\
3040         }\
3041         else {\
3042           CORRUPTION_ERROR_ACTION(M);\
3043           break;\
3044         }\
3045       }\
3046     }\
3047   }\
3048 }
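
/*
  Illustrative note on the descent in insert_large_chunk above (not compiled):
  K starts as the chunk size shifted left so that the first size bit that
  distinguishes chunks within this treebin sits at the top of the word; at
  each level the top bit of K picks child[0] (bit clear) or child[1] (bit set)
  and K is shifted left once more.  Chunks of a size already present are not
  pushed deeper: they are chained off the existing node via fd/bk, and only
  the head of such a chain carries parent/child links (parent == 0 marks the
  chained nodes).
*/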
3049 
3050 /*
3051   Unlink steps:
3052 
3053   1. If x is a chained node, unlink it from its same-sized fd/bk links
3054      and choose its bk node as its replacement.
3055   2. If x was the last node of its size, but not a leaf node, it must
3056      be replaced with a leaf node (not merely one with an open left or
3057      right), to make sure that lefts and rights of descendants
3058      correspond properly to bit masks.  We use the rightmost descendant
3059      of x.  We could use any other leaf, but this is easy to locate and
3060      tends to counteract removal of leftmosts elsewhere, and so keeps
3061      paths shorter than minimally guaranteed.  This doesn't loop much
3062      because on average a node in a tree is near the bottom.
3063   3. If x is the base of a chain (i.e., has parent links) relink
3064      x's parent and children to x's replacement (or null if none).
3065 */
3066 
3067 #define unlink_large_chunk(M, X) {\
3068   tchunkptr XP = X->parent;\
3069   tchunkptr R;\
3070   if (X->bk != X) {\
3071     tchunkptr F = X->fd;\
3072     R = X->bk;\
3073     if (RTCHECK(ok_address(M, F))) {\
3074       F->bk = R;\
3075       R->fd = F;\
3076     }\
3077     else {\
3078       CORRUPTION_ERROR_ACTION(M);\
3079     }\
3080   }\
3081   else {\
3082     tchunkptr* RP;\
3083     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3084         ((R = *(RP = &(X->child[0]))) != 0)) {\
3085       tchunkptr* CP;\
3086       while ((*(CP = &(R->child[1])) != 0) ||\
3087              (*(CP = &(R->child[0])) != 0)) {\
3088         R = *(RP = CP);\
3089       }\
3090       if (RTCHECK(ok_address(M, RP)))\
3091         *RP = 0;\
3092       else {\
3093         CORRUPTION_ERROR_ACTION(M);\
3094       }\
3095     }\
3096   }\
3097   if (XP != 0) {\
3098     tbinptr* H = treebin_at(M, X->index);\
3099     if (X == *H) {\
3100       if ((*H = R) == 0) \
3101         clear_treemap(M, X->index);\
3102     }\
3103     else if (RTCHECK(ok_address(M, XP))) {\
3104       if (XP->child[0] == X) \
3105         XP->child[0] = R;\
3106       else \
3107         XP->child[1] = R;\
3108     }\
3109     else\
3110       CORRUPTION_ERROR_ACTION(M);\
3111     if (R != 0) {\
3112       if (RTCHECK(ok_address(M, R))) {\
3113         tchunkptr C0, C1;\
3114         R->parent = XP;\
3115         if ((C0 = X->child[0]) != 0) {\
3116           if (RTCHECK(ok_address(M, C0))) {\
3117             R->child[0] = C0;\
3118             C0->parent = R;\
3119           }\
3120           else\
3121             CORRUPTION_ERROR_ACTION(M);\
3122         }\
3123         if ((C1 = X->child[1]) != 0) {\
3124           if (RTCHECK(ok_address(M, C1))) {\
3125             R->child[1] = C1;\
3126             C1->parent = R;\
3127           }\
3128           else\
3129             CORRUPTION_ERROR_ACTION(M);\
3130         }\
3131       }\
3132       else\
3133         CORRUPTION_ERROR_ACTION(M);\
3134     }\
3135   }\
3136 }
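
/*
  Illustrative sketch of the replacement choice in unlink_large_chunk above
  (not compiled): when X is the only chunk of its size and has children, its
  replacement R is X's rightmost descendant, found roughly as

      R = (X->child[1] != 0) ? X->child[1] : X->child[0];
      while (R->child[1] != 0 || R->child[0] != 0)
        R = (R->child[1] != 0) ? R->child[1] : R->child[0];

  Using a leaf keeps the left/right-vs-bitmask correspondence described in the
  comment before the macro intact once R is spliced into X's position.
*/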
3137 
3138 /* Relays to large vs small bin operations */
3139 
3140 #define insert_chunk(M, P, S)\
3141   if (is_small(S)) insert_small_chunk(M, P, S)\
3142   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3143 
3144 #define unlink_chunk(M, P, S)\
3145   if (is_small(S)) unlink_small_chunk(M, P, S)\
3146   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3147 
3148 
3149 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3150 
3151 #if ONLY_MSPACES
3152 #define internal_malloc(m, b) mspace_malloc(m, b)
3153 #define internal_free(m, mem) mspace_free(m,mem);
3154 #else /* ONLY_MSPACES */
3155 #if MSPACES
3156 #define internal_malloc(m, b)\
3157    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3158 #define internal_free(m, mem)\
3159    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3160 #else /* MSPACES */
3161 #define internal_malloc(m, b) dlmalloc(b)
3162 #define internal_free(m, mem) dlfree(mem)
3163 #endif /* MSPACES */
3164 #endif /* ONLY_MSPACES */
3165 
3166 /* -----------------------  Direct-mmapping chunks ----------------------- */
3167 
3168 /*
3169   Directly mmapped chunks are set up with an offset to the start of
3170   the mmapped region stored in the prev_foot field of the chunk. This
3171   allows reconstruction of the required argument to MUNMAP when freed,
3172   and also allows adjustment of the returned chunk to meet alignment
3173   requirements (especially in memalign).  There is also enough space
3174   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3175   the PINUSE bit so frees can be checked.
3176 */
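
/*
  Illustrative layout of a directly mmapped chunk (not compiled), matching
  mmap_alloc() below:

      mm                        base returned by DIRECT_MMAP
      p = mm + offset           chunk header: prev_foot = offset|IS_MMAPPED_BIT,
                                head = psize|CINUSE_BIT
      chunk2mem(p)              pointer handed back to the caller
      p + psize                 fake chunk, head == FENCEPOST_HEAD
      p + psize + SIZE_T_SIZE   trailing zero head

  On free, subtracting the stored offset from p recovers the original mmap
  base, so the whole region can be handed back to MUNMAP.
*/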
3177 
3178 /* Malloc using mmap */
3179 static void* mmap_alloc(mstate m, size_t nb) {
3180   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3181   if (mmsize > nb) {     /* Check for wrap around 0 */
3182     char* mm = (char*)(DIRECT_MMAP(mmsize));
3183     if (mm != CMFAIL) {
3184       size_t offset = align_offset(chunk2mem(mm));
3185       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3186       mchunkptr p = (mchunkptr)((unsigned long)mm + offset);
3187       p->prev_foot = offset | IS_MMAPPED_BIT;
3188       (p)->head = (psize|CINUSE_BIT);
3189       mark_inuse_foot(m, p, psize);
3190       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3191       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3192 
3193       if (mm < m->least_addr)
3194         m->least_addr = mm;
3195       if ((m->footprint += mmsize) > m->max_footprint)
3196         m->max_footprint = m->footprint;
3197       assert(is_aligned(chunk2mem(p)));
3198       check_mmapped_chunk(m, p);
3199       return chunk2mem(p);
3200     }
3201   }
3202   return 0;
3203 }
3204 
3205 /* Realloc using mmap */
3206 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3207   size_t oldsize = chunksize(oldp);
3208   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3209     return 0;
3210   /* Keep old chunk if big enough but not too big */
3211   if (oldsize >= nb + SIZE_T_SIZE &&
3212       (oldsize - nb) <= (mparams.granularity << 1))
3213     return oldp;
3214   else {
3215     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3216     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3217     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3218                                          CHUNK_ALIGN_MASK);
3219     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3220                                   oldmmsize, newmmsize, 1);
3221     if (cp != CMFAIL) {
3222       mchunkptr newp = (mchunkptr)((unsigned long)cp + offset);
3223       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3224       newp->head = (psize|CINUSE_BIT);
3225       mark_inuse_foot(m, newp, psize);
3226       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3227       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3228 
3229       if (cp < m->least_addr)
3230         m->least_addr = cp;
3231       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3232         m->max_footprint = m->footprint;
3233       check_mmapped_chunk(m, newp);
3234       return newp;
3235     }
3236   }
3237   return 0;
3238 }
3239 
3240 /* -------------------------- mspace management -------------------------- */
3241 
3242 /* Initialize top chunk and its size */
3243 static void init_top(mstate m, mchunkptr p, size_t psize) {
3244   /* Ensure alignment */
3245   size_t offset = align_offset(chunk2mem(p));
3246   p = (mchunkptr)((unsigned long)p + offset);
3247   psize -= offset;
3248 
3249   m->top = p;
3250   m->topsize = psize;
3251   p->head = psize | PINUSE_BIT;
3252   /* set size of fake trailing chunk holding overhead space only once */
3253   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3254   m->trim_check = mparams.trim_threshold; /* reset on each update */
3255 }
3256 
3257 /* Initialize bins for a new mstate that is otherwise zeroed out */
3258 static void init_bins(mstate m) {
3259   /* Establish circular links for smallbins */
3260   bindex_t i;
3261   for (i = 0; i < NSMALLBINS; ++i) {
3262     sbinptr bin = smallbin_at(m,i);
3263     bin->fd = bin->bk = bin;
3264   }
3265 }
3266 
3267 #if PROCEED_ON_ERROR
3268 
3269 /* default corruption action */
3270 static void reset_on_error(mstate m) {
3271   int i;
3272   ++malloc_corruption_error_count;
3273   /* Reinitialize fields to forget about all memory */
3274   m->smallbins = m->treebins = 0;
3275   m->dvsize = m->topsize = 0;
3276   m->seg.base = 0;
3277   m->seg.size = 0;
3278   m->seg.next = 0;
3279   m->top = m->dv = 0;
3280   for (i = 0; i < NTREEBINS; ++i)
3281     *treebin_at(m, i) = 0;
3282   init_bins(m);
3283 }
3284 #endif /* PROCEED_ON_ERROR */
3285 
3286 /* Allocate chunk and prepend remainder with chunk in successor base. */
3287 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3288                            size_t nb) {
3289   mchunkptr p = align_as_chunk(newbase);
3290   mchunkptr oldfirst = align_as_chunk(oldbase);
3291   size_t psize = (char*)oldfirst - (char*)p;
3292   mchunkptr q = chunk_plus_offset(p, nb);
3293   size_t qsize = psize - nb;
3294   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3295 
3296   assert((char*)oldfirst > (char*)q);
3297   assert(pinuse(oldfirst));
3298   assert(qsize >= MIN_CHUNK_SIZE);
3299 
3300   /* consolidate remainder with first chunk of old base */
3301   if (oldfirst == m->top) {
3302     size_t tsize = m->topsize += qsize;
3303     m->top = q;
3304     q->head = tsize | PINUSE_BIT;
3305     check_top_chunk(m, q);
3306   }
3307   else if (oldfirst == m->dv) {
3308     size_t dsize = m->dvsize += qsize;
3309     m->dv = q;
3310     set_size_and_pinuse_of_free_chunk(q, dsize);
3311   }
3312   else {
3313     if (!cinuse(oldfirst)) {
3314       size_t nsize = chunksize(oldfirst);
3315       unlink_chunk(m, oldfirst, nsize);
3316       oldfirst = chunk_plus_offset(oldfirst, nsize);
3317       qsize += nsize;
3318     }
3319     set_free_with_pinuse(q, qsize, oldfirst);
3320     insert_chunk(m, q, qsize);
3321     check_free_chunk(m, q);
3322   }
3323 
3324   check_malloced_chunk(m, chunk2mem(p), nb);
3325   return chunk2mem(p);
3326 }
3327 
3328 
3329 /* Add a segment to hold a new noncontiguous region */
3330 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3331   /* Determine locations and sizes of segment, fenceposts, old top */
3332   char* old_top = (char*)m->top;
3333   msegmentptr oldsp = segment_holding(m, old_top);
3334   char* old_end = oldsp->base + oldsp->size;
3335   size_t ssize = pad_request(sizeof(struct malloc_segment));
3336   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3337   size_t offset = align_offset(chunk2mem(rawsp));
3338   char* asp = rawsp + offset;
3339   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3340   mchunkptr sp = (mchunkptr)((unsigned long)csp);
3341   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3342   mchunkptr tnext = chunk_plus_offset(sp, ssize);
3343   mchunkptr p = tnext;
3344   int nfences = 0;
3345 
3346   /* reset top to new space */
3347   init_top(m, (mchunkptr)((unsigned long)tbase), tsize - TOP_FOOT_SIZE);
3348 
3349   /* Set up segment record */
3350   assert(is_aligned(ss));
3351   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3352   *ss = m->seg; /* Push current record */
3353   m->seg.base = tbase;
3354   m->seg.size = tsize;
3355   m->seg.sflags = mmapped;
3356   m->seg.next = ss;
3357 
3358   /* Insert trailing fenceposts */
3359   for (;;) {
3360     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3361     p->head = FENCEPOST_HEAD;
3362     ++nfences;
3363     if ((char*)(&(nextp->head)) < old_end)
3364       p = nextp;
3365     else
3366       break;
3367   }
3368   assert(nfences >= 2);
3369 
3370   /* Insert the rest of old top into a bin as an ordinary free chunk */
3371   if (csp != old_top) {
3372     mchunkptr q = (mchunkptr)((unsigned long)old_top);
3373     size_t psize = csp - old_top;
3374     mchunkptr tn = chunk_plus_offset(q, psize);
3375     set_free_with_pinuse(q, psize, tn);
3376     insert_chunk(m, q, psize);
3377   }
3378 
3379   check_top_chunk(m, m->top);
3380 }
3381 
3382 /* -------------------------- System allocation -------------------------- */
3383 
3384 /* Get memory from system using MORECORE or MMAP */
3385 static void* sys_alloc(mstate m, size_t nb) {
3386   char* tbase = CMFAIL;
3387   size_t tsize = 0;
3388   flag_t mmap_flag = 0;
3389 
3390   init_mparams();
3391 
3392   /* Directly map large chunks */
3393   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3394     void* mem = mmap_alloc(m, nb);
3395     if (mem != 0)
3396       return mem;
3397   }
3398 
3399   /*
3400     Try getting memory in any of three ways (in most-preferred to
3401     least-preferred order):
3402     1. A call to MORECORE that can normally contiguously extend memory.
3403        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3404        main space is mmapped or a previous contiguous call failed)
3405     2. A call to MMAP new space (disabled if not HAVE_MMAP).
3406        Note that under the default settings, if MORECORE is unable to
3407        fulfill a request, and HAVE_MMAP is true, then mmap is
3408        used as a noncontiguous system allocator. This is a useful backup
3409        strategy for systems with holes in address spaces -- in this case
3410        sbrk cannot contiguously expand the heap, but mmap may be able to
3411        find space.
3412     3. A call to MORECORE that cannot usually contiguously extend memory.
3413        (disabled if not HAVE_MORECORE)
3414   */
3415 
3416   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3417     char* br = CMFAIL;
3418     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3419     size_t asize = 0;
3420     {
3421         int ret_value = ACQUIRE_MORECORE_LOCK();
3422         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
3423     }
3424 
3425     if (ss == 0) {  /* First time through or recovery */
3426       char* base = (char*)CALL_MORECORE(0);
3427       if (base != CMFAIL) {
3428         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3429         /* Adjust to end on a page boundary */
3430         if (!is_page_aligned(base))
3431           asize += (page_align((size_t)base) - (size_t)base);
3432         /* Can't call MORECORE if size is negative when treated as signed */
3433         if (asize < HALF_MAX_SIZE_T &&
3434             (br = (char*)(CALL_MORECORE(asize))) == base) {
3435           tbase = base;
3436           tsize = asize;
3437         }
3438       }
3439     }
3440     else {
3441       /* Subtract out existing available top space from MORECORE request. */
3442       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3443       /* Use mem here only if it did contiguously extend old space */
3444       if (asize < HALF_MAX_SIZE_T &&
3445           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3446         tbase = br;
3447         tsize = asize;
3448       }
3449     }
3450 
3451     if (tbase == CMFAIL) {    /* Cope with partial failure */
3452       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3453         if (asize < HALF_MAX_SIZE_T &&
3454             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3455           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3456           if (esize < HALF_MAX_SIZE_T) {
3457             char* end = (char*)CALL_MORECORE(esize);
3458             if (end != CMFAIL)
3459               asize += esize;
3460             else {            /* Can't use; try to release */
3461               CALL_MORECORE(-asize);
3462               br = CMFAIL;
3463             }
3464           }
3465         }
3466       }
3467       if (br != CMFAIL) {    /* Use the space we did get */
3468         tbase = br;
3469         tsize = asize;
3470       }
3471       else
3472         disable_contiguous(m); /* Don't try contiguous path in the future */
3473     }
3474 
3475     RELEASE_MORECORE_LOCK();
3476   }
3477 
3478   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
3479     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3480     size_t rsize = granularity_align(req);
3481     if (rsize > nb) { /* Fail if wraps around zero */
3482       char* mp = (char*)(CALL_MMAP(rsize));
3483       if (mp != CMFAIL) {
3484         tbase = mp;
3485         tsize = rsize;
3486         mmap_flag = IS_MMAPPED_BIT;
3487       }
3488     }
3489   }
3490 
3491   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3492     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3493     if (asize < HALF_MAX_SIZE_T) {
3494       char* br = CMFAIL;
3495       char* end = CMFAIL;
3496       {
3497         int ret_value = ACQUIRE_MORECORE_LOCK();
3498         if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
3499       }
3500       br = (char*)(CALL_MORECORE(asize));
3501       end = (char*)(CALL_MORECORE(0));
3502       RELEASE_MORECORE_LOCK();
3503       if (br != CMFAIL && end != CMFAIL && br < end) {
3504         size_t ssize = end - br;
3505         if (ssize > nb + TOP_FOOT_SIZE) {
3506           tbase = br;
3507           tsize = ssize;
3508         }
3509       }
3510     }
3511   }
3512 
3513   if (tbase != CMFAIL) {
3514 
3515     if ((m->footprint += tsize) > m->max_footprint)
3516       m->max_footprint = m->footprint;
3517 
3518     if (!is_initialized(m)) { /* first-time initialization */
3519       m->seg.base = m->least_addr = tbase;
3520       m->seg.size = tsize;
3521       m->seg.sflags = mmap_flag;
3522       m->magic = mparams.magic;
3523       init_bins(m);
3524       if (is_global(m))
3525         init_top(m, (mchunkptr)((unsigned long)tbase), tsize - TOP_FOOT_SIZE);
3526       else {
3527         /* Offset top by embedded malloc_state */
3528         mchunkptr mn = next_chunk(mem2chunk(m));
3529         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3530       }
3531     }
3532 
3533     else {
3534       /* Try to merge with an existing segment */
3535       msegmentptr sp = &m->seg;
3536       while (sp != 0 && tbase != sp->base + sp->size)
3537         sp = sp->next;
3538       if (sp != 0 &&
3539           !is_extern_segment(sp) &&
3540           (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3541           segment_holds(sp, m->top)) { /* append */
3542         sp->size += tsize;
3543         init_top(m, m->top, m->topsize + tsize);
3544       }
3545       else {
3546         if (tbase < m->least_addr)
3547           m->least_addr = tbase;
3548         sp = &m->seg;
3549         while (sp != 0 && sp->base != tbase + tsize)
3550           sp = sp->next;
3551         if (sp != 0 &&
3552             !is_extern_segment(sp) &&
3553             (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3554           char* oldbase = sp->base;
3555           sp->base = tbase;
3556           sp->size += tsize;
3557           return prepend_alloc(m, tbase, oldbase, nb);
3558         }
3559         else
3560           add_segment(m, tbase, tsize, mmap_flag);
3561       }
3562     }
3563 
3564     if (nb < m->topsize) { /* Allocate from new or extended top space */
3565       size_t rsize = m->topsize -= nb;
3566       mchunkptr p = m->top;
3567       mchunkptr r = m->top = chunk_plus_offset(p, nb);
3568       r->head = rsize | PINUSE_BIT;
3569       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3570       check_top_chunk(m, m->top);
3571       check_malloced_chunk(m, chunk2mem(p), nb);
3572       return chunk2mem(p);
3573     }
3574   }
3575 
3576   MALLOC_FAILURE_ACTION;
3577   return 0;
3578 }
3579 
3580 /* -----------------------  system deallocation -------------------------- */
3581 
3582 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3583 static size_t release_unused_segments(mstate m) {
3584   size_t released = 0;
3585   msegmentptr pred = &m->seg;
3586   msegmentptr sp = pred->next;
3587   while (sp != 0) {
3588     char* base = sp->base;
3589     size_t size = sp->size;
3590     msegmentptr next = sp->next;
3591     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3592       mchunkptr p = align_as_chunk(base);
3593       size_t psize = chunksize(p);
3594       /* Can unmap if first chunk holds entire segment and not pinned */
3595       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3596         tchunkptr tp = (tchunkptr)p;
3597         assert(segment_holds(sp, (char*)sp));
3598         if (p == m->dv) {
3599           m->dv = 0;
3600           m->dvsize = 0;
3601         }
3602         else {
3603           unlink_large_chunk(m, tp);
3604         }
3605         if (CALL_MUNMAP(base, size) == 0) {
3606           released += size;
3607           m->footprint -= size;
3608           /* unlink obsoleted record */
3609           sp = pred;
3610           sp->next = next;
3611         }
3612         else { /* back out if cannot unmap */
3613           insert_large_chunk(m, tp, psize);
3614         }
3615       }
3616     }
3617     pred = sp;
3618     sp = next;
3619   }
3620   return released;
3621 }
3622 
3623 static int sys_trim(mstate m, size_t pad) {
3624   size_t released = 0;
3625   if (pad < MAX_REQUEST && is_initialized(m)) {
3626     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3627 
3628     if (m->topsize > pad) {
3629       /* Shrink top space in granularity-size units, keeping at least one */
3630       size_t unit = mparams.granularity;
3631       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3632                       SIZE_T_ONE) * unit;
3633       msegmentptr sp = segment_holding(m, (char*)m->top);
3634 
3635       if (!is_extern_segment(sp)) {
3636         if (is_mmapped_segment(sp)) {
3637           if (HAVE_MMAP &&
3638               sp->size >= extra &&
3639               !has_segment_link(m, sp)) { /* can't shrink if pinned */
3640             size_t newsize = sp->size - extra; /* used by CALL_MREMAP/CALL_MUNMAP below */
3641             /* Prefer mremap, fall back to munmap */
3642             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3643                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3644               released = extra;
3645             }
3646           }
3647         }
3648         else if (HAVE_MORECORE) {
3649           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3650             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3651           {
3652             int ret_value = ACQUIRE_MORECORE_LOCK();
3653             if (ret_value) printf("%s.%d lock error\n", __FUNCTION__, __LINE__);
3654           }
3655           {
3656             /* Make sure end of memory is where we last set it. */
3657             char* old_br = (char*)(CALL_MORECORE(0));
3658             if (old_br == sp->base + sp->size) {
3659               char* rel_br = (char*)(CALL_MORECORE(-extra));
3660               char* new_br = (char*)(CALL_MORECORE(0));
3661               if (rel_br != CMFAIL && new_br < old_br)
3662                 released = old_br - new_br;
3663             }
3664           }
3665           RELEASE_MORECORE_LOCK();
3666         }
3667       }
3668 
3669       if (released != 0) {
3670         sp->size -= released;
3671         m->footprint -= released;
3672         init_top(m, m->top, m->topsize - released);
3673         check_top_chunk(m, m->top);
3674       }
3675     }
3676 
3677     /* Unmap any unused mmapped segments */
3678     if (HAVE_MMAP)
3679       released += release_unused_segments(m);
3680 
3681     /* On failure, disable autotrim to avoid repeated failed future calls */
3682     if (released == 0)
3683       m->trim_check = MAX_SIZE_T;
3684   }
3685 
3686   return (released != 0)? 1 : 0;
3687 }
3688 
3689 /* ---------------------------- malloc support --------------------------- */
3690 
3691 /* allocate a large request from the best fitting chunk in a treebin */
3692 static void* tmalloc_large(mstate m, size_t nb) {
3693   tchunkptr v = 0;
3694   size_t rsize = -nb; /* Unsigned negation */
3695   tchunkptr t;
3696   bindex_t idx;
3697   compute_tree_index(nb, idx);
3698 
3699   if ((t = *treebin_at(m, idx)) != 0) {
3700     /* Traverse tree for this bin looking for node with size == nb */
3701     size_t sizebits = nb << leftshift_for_tree_index(idx);
3702     tchunkptr rst = 0;  /* The deepest untaken right subtree */
3703     for (;;) {
3704       tchunkptr rt;
3705       size_t trem = chunksize(t) - nb;
3706       if (trem < rsize) {
3707         v = t;
3708         if ((rsize = trem) == 0)
3709           break;
3710       }
3711       rt = t->child[1];
3712       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3713       if (rt != 0 && rt != t)
3714         rst = rt;
3715       if (t == 0) {
3716         t = rst; /* set t to least subtree holding sizes > nb */
3717         break;
3718       }
3719       sizebits <<= 1;
3720     }
3721   }
3722 
3723   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3724     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3725     if (leftbits != 0) {
3726       bindex_t i;
3727       binmap_t leastbit = least_bit(leftbits);
3728       compute_bit2idx(leastbit, i);
3729       t = *treebin_at(m, i);
3730     }
3731   }
3732 
3733   while (t != 0) { /* find smallest of tree or subtree */
3734     size_t trem = chunksize(t) - nb;
3735     if (trem < rsize) {
3736       rsize = trem;
3737       v = t;
3738     }
3739     t = leftmost_child(t);
3740   }
3741 
3742   /*  If dv is a better fit, return 0 so malloc will use it */
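  /* Note: the subtraction below is unsigned, so when dvsize < nb the
     right-hand side wraps to a huge value and any chunk found here wins. */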
3743   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3744     if (RTCHECK(ok_address(m, v))) { /* split */
3745       mchunkptr r = chunk_plus_offset(v, nb);
3746       assert(chunksize(v) == rsize + nb);
3747       if (RTCHECK(ok_next(v, r))) {
3748         unlink_large_chunk(m, v);
3749         if (rsize < MIN_CHUNK_SIZE)
3750           set_inuse_and_pinuse(m, v, (rsize + nb));
3751         else {
3752           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3753           set_size_and_pinuse_of_free_chunk(r, rsize);
3754           insert_chunk(m, r, rsize);
3755         }
3756         return chunk2mem(v);
3757       }
3758     }
3759     CORRUPTION_ERROR_ACTION(m);
3760   }
3761   return 0;
3762 }
3763 
3764 /* allocate a small request from the best fitting chunk in a treebin */
3765 static void* tmalloc_small(mstate m, size_t nb) {
3766   tchunkptr t, v;
3767   size_t rsize;
3768   bindex_t i;
3769   binmap_t leastbit = least_bit(m->treemap);
3770   compute_bit2idx(leastbit, i);
3771 
3772   v = t = *treebin_at(m, i);
3773   rsize = chunksize(t) - nb;
3774 
3775   while ((t = leftmost_child(t)) != 0) {
3776     size_t trem = chunksize(t) - nb;
3777     if (trem < rsize) {
3778       rsize = trem;
3779       v = t;
3780     }
3781   }
3782 
3783   if (RTCHECK(ok_address(m, v))) {
3784     mchunkptr r = chunk_plus_offset(v, nb);
3785     assert(chunksize(v) == rsize + nb);
3786     if (RTCHECK(ok_next(v, r))) {
3787       unlink_large_chunk(m, v);
3788       if (rsize < MIN_CHUNK_SIZE)
3789         set_inuse_and_pinuse(m, v, (rsize + nb));
3790       else {
3791         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3792         set_size_and_pinuse_of_free_chunk(r, rsize);
3793         replace_dv(m, r, rsize);
3794       }
3795       return chunk2mem(v);
3796     }
3797   }
3798 
3799   CORRUPTION_ERROR_ACTION(m);
3800   return 0;
3801 }
3802 
3803 /* --------------------------- realloc support --------------------------- */
3804 
3805 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3806   if (bytes >= MAX_REQUEST) {
3807     MALLOC_FAILURE_ACTION;
3808     return 0;
3809   }
3810   if (!PREACTION(m)) {
3811     mchunkptr oldp = mem2chunk(oldmem);
3812     size_t oldsize = chunksize(oldp);
3813     mchunkptr next = chunk_plus_offset(oldp, oldsize);
3814     mchunkptr newp = 0;
3815     void* extra = 0;
3816 
3817     /* Try to either shrink or extend into top. Else malloc-copy-free */
3818 
3819     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3820                 ok_next(oldp, next) && ok_pinuse(next))) {
3821       size_t nb = request2size(bytes);
3822       if (is_mmapped(oldp))
3823         newp = mmap_resize(m, oldp, nb);
3824       else if (oldsize >= nb) { /* already big enough */
3825         size_t rsize = oldsize - nb;
3826         newp = oldp;
3827         if (rsize >= MIN_CHUNK_SIZE) {
3828           mchunkptr remainder = chunk_plus_offset(newp, nb);
3829           set_inuse(m, newp, nb);
3830           set_inuse(m, remainder, rsize);
3831           extra = chunk2mem(remainder);
3832         }
3833       }
3834       else if (next == m->top && oldsize + m->topsize > nb) {
3835         /* Expand into top */
3836         size_t newsize = oldsize + m->topsize;
3837         size_t newtopsize = newsize - nb;
3838         mchunkptr newtop = chunk_plus_offset(oldp, nb);
3839         set_inuse(m, oldp, nb);
3840         newtop->head = newtopsize |PINUSE_BIT;
3841         m->top = newtop;
3842         m->topsize = newtopsize;
3843         newp = oldp;
3844       }
3845     }
3846     else {
3847       USAGE_ERROR_ACTION(m, oldmem);
3848       POSTACTION(m);
3849       return 0;
3850     }
3851 
3852     POSTACTION(m);
3853 
3854     if (newp != 0) {
3855       if (extra != 0) {
3856         internal_free(m, extra);
3857       }
3858       check_inuse_chunk(m, newp);
3859       return chunk2mem(newp);
3860     }
3861     else {
3862       void* newmem = internal_malloc(m, bytes);
3863       if (newmem != 0) {
3864         size_t oc = oldsize - overhead_for(oldp);
3865         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3866         internal_free(m, oldmem);
3867       }
3868       return newmem;
3869     }
3870   }
3871   return 0;
3872 }
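
/*
  Illustrative summary of internal_realloc above (not compiled): shrinking
  splits off a tail chunk ("extra") and frees it when the remainder is at
  least MIN_CHUNK_SIZE; growing succeeds in place only for mmapped chunks or
  when the chunk borders top; otherwise the fallback is the classic
  malloc/copy/free sequence already visible above:

      void* newmem = internal_malloc(m, bytes);
      if (newmem != 0) {
        size_t oc = oldsize - overhead_for(oldp);
        memcpy(newmem, oldmem, (oc < bytes) ? oc : bytes);
        internal_free(m, oldmem);
      }
*/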
3873 
3874 /* --------------------------- memalign support -------------------------- */
3875 
3876 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3877   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3878     return internal_malloc(m, bytes);
3879   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3880     alignment = MIN_CHUNK_SIZE;
3881   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3882     size_t a = MALLOC_ALIGNMENT << 1;
3883     while (a < alignment) a <<= 1;
3884     alignment = a;
3885   }
3886 
3887   if (bytes >= MAX_REQUEST - alignment) {
3888     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3889       MALLOC_FAILURE_ACTION;
3890     }
3891   }
3892   else {
3893     size_t nb = request2size(bytes);
3894     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3895     char* mem = (char*)internal_malloc(m, req);
3896     if (mem != 0) {
3897       void* leader = 0;
3898       void* trailer = 0;
3899       mchunkptr p = mem2chunk(mem);
3900 
3901       if (PREACTION(m)) return 0;
3902       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3903         /*
3904           Find an aligned spot inside chunk.  Since we need to give
3905           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3906           the first calculation places us at a spot with less than
3907           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3908           We've allocated enough total room so that this is always
3909           possible.
3910         */
3911         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3912                                                        alignment -
3913                                                        SIZE_T_ONE)) &
3914                                              -alignment));
3915         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3916           br : br+alignment;
3917         mchunkptr newp = (mchunkptr)((unsigned long)pos);
3918         size_t leadsize = pos - (char*)(p);
3919         size_t newsize = chunksize(p) - leadsize;
3920 
3921         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3922           newp->prev_foot = p->prev_foot + leadsize;
3923           newp->head = (newsize|CINUSE_BIT);
3924         }
3925         else { /* Otherwise, give back leader, use the rest */
3926           set_inuse(m, newp, newsize);
3927           set_inuse(m, p, leadsize);
3928           leader = chunk2mem(p);
3929         }
3930         p = newp;
3931       }
3932 
3933       /* Give back spare room at the end */
3934       if (!is_mmapped(p)) {
3935         size_t size = chunksize(p);
3936         if (size > nb + MIN_CHUNK_SIZE) {
3937           size_t remainder_size = size - nb;
3938           mchunkptr remainder = chunk_plus_offset(p, nb);
3939           set_inuse(m, p, nb);
3940           set_inuse(m, remainder, remainder_size);
3941           trailer = chunk2mem(remainder);
3942         }
3943       }
3944 
3945       assert (chunksize(p) >= nb);
3946       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3947       check_inuse_chunk(m, p);
3948       POSTACTION(m);
3949       if (leader != 0) {
3950         internal_free(m, leader);
3951       }
3952       if (trailer != 0) {
3953         internal_free(m, trailer);
3954       }
3955       return chunk2mem(p);
3956     }
3957   }
3958   return 0;
3959 }
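
/*
  Illustrative sketch, not part of the allocator: internal_memalign()
  over-allocates by roughly alignment + MIN_CHUNK_SIZE, then gives back the
  misaligned leader and any trailer as ordinary free chunks.  Assuming the
  dlmemalign() wrapper defined with the public routines below:

      void* buf = dlmemalign(4096, 10000);   // 4096-byte aligned if non-null
      // buf == 0 on failure; otherwise ((size_t)buf % 4096) == 0
      dlfree(buf);
*/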
3960 
3961 /* ------------------------ comalloc/coalloc support --------------------- */
3962 
3963 static void** ialloc(mstate m,
3964                      size_t n_elements,
3965                      size_t* sizes,
3966                      int opts,
3967                      void* chunks[]) {
3968   /*
3969     This provides common support for independent_X routines, handling
3970     all of the combinations that can result.
3971 
3972     The opts arg has:
3973     bit 0 set if all elements are same size (using sizes[0])
3974     bit 1 set if elements should be zeroed
3975   */
3976 
3977   size_t    element_size;   /* chunksize of each element, if all same */
3978   size_t    contents_size;  /* total size of elements */
3979   size_t    array_size;     /* request size of pointer array */
3980   void*     mem;            /* malloced aggregate space */
3981   mchunkptr p;              /* corresponding chunk */
3982   size_t    remainder_size; /* remaining bytes while splitting */
3983   void**    marray;         /* either "chunks" or malloced ptr array */
3984   mchunkptr array_chunk;    /* chunk for malloced ptr array */
3985   flag_t    was_enabled;    /* to disable mmap */
3986   size_t    size;
3987   size_t    i;
3988 
3989   /* compute array length, if needed */
3990   if (chunks != 0) {
3991     if (n_elements == 0)
3992       return chunks; /* nothing to do */
3993     marray = chunks;
3994     array_size = 0;
3995   }
3996   else {
3997     /* if empty req, must still return chunk representing empty array */
3998     if (n_elements == 0)
3999       return (void**)internal_malloc(m, 0);
4000     marray = 0;
4001     array_size = request2size(n_elements * (sizeof(void*)));
4002   }
4003 
4004   /* compute total element size */
4005   if (opts & 0x1) { /* all-same-size */
4006     element_size = request2size(*sizes);
4007     contents_size = n_elements * element_size;
4008   }
4009   else { /* add up all the sizes */
4010     element_size = 0;
4011     contents_size = 0;
4012     for (i = 0; i != n_elements; ++i)
4013       contents_size += request2size(sizes[i]);
4014   }
4015 
4016   size = contents_size + array_size;
4017 
4018   /*
4019      Allocate the aggregate chunk.  First disable direct-mmapping so
4020      malloc won't use it, since we would not be able to later
4021      free/realloc space internal to a segregated mmap region.
4022   */
4023   was_enabled = use_mmap(m);
4024   disable_mmap(m);
4025   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
4026   if (was_enabled)
4027     enable_mmap(m);
4028   if (mem == 0)
4029     return 0;
4030 
4031   if (PREACTION(m)) return 0;
4032   p = mem2chunk(mem);
4033   remainder_size = chunksize(p);
4034 
4035   assert(!is_mmapped(p));
4036 
4037   if (opts & 0x2) {       /* optionally clear the elements */
4038     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4039   }
4040 
4041   /* If not provided, allocate the pointer array as final part of chunk */
4042   if (marray == 0) {
4043     size_t  array_chunk_size;
4044     array_chunk = chunk_plus_offset(p, contents_size);
4045     array_chunk_size = remainder_size - contents_size;
4046     marray = (void**) (chunk2mem(array_chunk));
4047     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4048     remainder_size = contents_size;
4049   }
4050 
4051   /* split out elements */
4052   for (i = 0; ; ++i) {
4053     marray[i] = chunk2mem(p);
4054     if (i != n_elements-1) {
4055       if (element_size != 0)
4056         size = element_size;
4057       else
4058         size = request2size(sizes[i]);
4059       remainder_size -= size;
4060       set_size_and_pinuse_of_inuse_chunk(m, p, size);
4061       p = chunk_plus_offset(p, size);
4062     }
4063     else { /* the final element absorbs any overallocation slop */
4064       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4065       break;
4066     }
4067   }
4068 
4069 #if DEBUG
4070   if (marray != chunks) {
4071     /* final element must have exactly exhausted chunk */
4072     if (element_size != 0) {
4073       assert(remainder_size == element_size);
4074     }
4075     else {
4076       assert(remainder_size == request2size(sizes[i]));
4077     }
4078     check_inuse_chunk(m, mem2chunk(marray));
4079   }
4080   for (i = 0; i != n_elements; ++i)
4081     check_inuse_chunk(m, mem2chunk(marray[i]));
4082 
4083 #endif /* DEBUG */
4084 
4085   POSTACTION(m);
4086   return marray;
4087 }
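
/*
  Illustrative sketch, not part of the allocator: ialloc() backs the
  independent allocation entry points.  Assuming the dlindependent_calloc()
  wrapper defined with the public routines below, and a hypothetical
  struct node:

      void** elems = dlindependent_calloc(16, sizeof(struct node), 0);
      // elems[0..15] are zeroed, equal-sized, and individually freeable via
      // dlfree(); the pointer array itself was allocated by this call and
      // should also be released with dlfree(elems) when no longer needed.
*/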
4088 
4089 
4090 /* -------------------------- public routines ---------------------------- */
4091 
4092 #if !ONLY_MSPACES
4093 
4094 void* dlmalloc(size_t bytes) {
4095   /*
4096      Basic algorithm:
4097      If a small request (< 256 bytes minus per-chunk overhead):
4098        1. If one exists, use a remainderless chunk in associated smallbin.
4099           (Remainderless means that there are too few excess bytes to
4100           represent as a chunk.)
4101        2. If it is big enough, use the dv chunk, which is normally the
4102           chunk adjacent to the one used for the most recent small request.
4103        3. If one exists, split the smallest available chunk in a bin,
4104           saving remainder in dv.
4105        4. If it is big enough, use the top chunk.
4106        5. If available, get memory from system and use it
4107      Otherwise, for a large request:
4108        1. Find the smallest available binned chunk that fits, and use it
4109           if it is better fitting than dv chunk, splitting if necessary.
4110        2. If better fitting than any binned chunk, use the dv chunk.
4111        3. If it is big enough, use the top chunk.
4112        4. If request size >= mmap threshold, try to directly mmap this chunk.
4113        5. If available, get memory from system and use it
4114 
4115      The ugly gotos here ensure that postaction occurs along all paths.
4116   */
4117 
4118   if (!PREACTION(gm)) {
4119     void* mem;
4120     size_t nb;
4121     if (bytes <= MAX_SMALL_REQUEST) {
4122       bindex_t idx;
4123       binmap_t smallbits;
4124       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4125       idx = small_index(nb);
4126       smallbits = gm->smallmap >> idx;
4127 
4128       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4129         mchunkptr b, p;
4130         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4131         b = smallbin_at(gm, idx);
4132         p = b->fd;
4133         assert(chunksize(p) == small_index2size(idx));
4134         unlink_first_small_chunk(gm, b, p, idx);
4135         set_inuse_and_pinuse(gm, p, small_index2size(idx));
4136         mem = chunk2mem(p);
4137         check_malloced_chunk(gm, mem, nb);
4138         goto postaction;
4139       }
4140 
4141       else if (nb > gm->dvsize) {
4142         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4143           mchunkptr b, p, r;
4144           size_t rsize;
4145           bindex_t i;
4146           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4147           binmap_t leastbit = least_bit(leftbits);
4148           compute_bit2idx(leastbit, i);
4149           b = smallbin_at(gm, i);
4150           p = b->fd;
4151           assert(chunksize(p) == small_index2size(i));
4152           unlink_first_small_chunk(gm, b, p, i);
4153           rsize = small_index2size(i) - nb;
4154           /* Fit here cannot be remainderless if 4byte sizes */
4155           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4156             set_inuse_and_pinuse(gm, p, small_index2size(i));
4157           else {
4158             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4159             r = chunk_plus_offset(p, nb);
4160             set_size_and_pinuse_of_free_chunk(r, rsize);
4161             replace_dv(gm, r, rsize);
4162           }
4163           mem = chunk2mem(p);
4164           check_malloced_chunk(gm, mem, nb);
4165           goto postaction;
4166         }
4167 
4168         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4169           check_malloced_chunk(gm, mem, nb);
4170           goto postaction;
4171         }
4172       }
4173     }
4174     else if (bytes >= MAX_REQUEST)
4175       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4176     else {
4177       nb = pad_request(bytes);
4178       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4179         check_malloced_chunk(gm, mem, nb);
4180         goto postaction;
4181       }
4182     }
4183 
4184     if (nb <= gm->dvsize) {
4185       size_t rsize = gm->dvsize - nb;
4186       mchunkptr p = gm->dv;
4187       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4188         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4189         gm->dvsize = rsize;
4190         set_size_and_pinuse_of_free_chunk(r, rsize);
4191         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4192       }
4193       else { /* exhaust dv */
4194         size_t dvs = gm->dvsize;
4195         gm->dvsize = 0;
4196         gm->dv = 0;
4197         set_inuse_and_pinuse(gm, p, dvs);
4198       }
4199       mem = chunk2mem(p);
4200       check_malloced_chunk(gm, mem, nb);
4201       goto postaction;
4202     }
4203 
4204     else if (nb < gm->topsize) { /* Split top */
4205       size_t rsize = gm->topsize -= nb;
4206       mchunkptr p = gm->top;
4207       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4208       r->head = rsize | PINUSE_BIT;
4209       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4210       mem = chunk2mem(p);
4211       check_top_chunk(gm, gm->top);
4212       check_malloced_chunk(gm, mem, nb);
4213       goto postaction;
4214     }
4215 
4216     mem = sys_alloc(gm, nb);
4217 
4218   postaction:
4219     POSTACTION(gm);
4220     return mem;
4221   }
4222 
4223   return 0;
4224 }
4225 
4226 void dlfree(void* mem) {
4227   /*
4228      Consolidate freed chunks with preceding or succeeding bordering
4229      free chunks, if they exist, and then place in a bin.  Intermixed
4230      with special cases for top, dv, mmapped chunks, and usage errors.
4231   */
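  /*
     Rough picture of the cases handled below (addresses grow downward;
     sizes come from the chunk headers):

        [ prev ]   if !pinuse(p): prev is free, so p is merged backward
        [  p   ]   the chunk being freed (psize = chunksize(p))
        [ next ]   if !cinuse(next): next is free and p absorbs it too,
                   with special cases when next is the top or dv chunk
  */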
4232 
4233   if (mem != 0) {
4234     mchunkptr p  = mem2chunk(mem);
4235 #if FOOTERS
4236     mstate fm = get_mstate_for(p);
4237     if (!ok_magic(fm)) {
4238       USAGE_ERROR_ACTION(fm, p);
4239       return;
4240     }
4241 #else /* FOOTERS */
4242 #define fm gm
4243 #endif /* FOOTERS */
4244     if (!PREACTION(fm)) {
4245       check_inuse_chunk(fm, p);
4246       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4247         size_t psize = chunksize(p);
4248         mchunkptr next = chunk_plus_offset(p, psize);
4249         if (!pinuse(p)) {
4250           size_t prevsize = p->prev_foot;
4251           if ((prevsize & IS_MMAPPED_BIT) != 0) {
4252             prevsize &= ~IS_MMAPPED_BIT;
4253             psize += prevsize + MMAP_FOOT_PAD;
4254             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4255               fm->footprint -= psize;
4256             goto postaction;
4257           }
4258           else {
4259             mchunkptr prev = chunk_minus_offset(p, prevsize);
4260             psize += prevsize;
4261             p = prev;
4262             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4263               if (p != fm->dv) {
4264                 unlink_chunk(fm, p, prevsize);
4265               }
4266               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4267                 fm->dvsize = psize;
4268                 set_free_with_pinuse(p, psize, next);
4269                 goto postaction;
4270               }
4271             }
4272             else
4273               goto erroraction;
4274           }
4275         }
4276 
4277         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4278           if (!cinuse(next)) {  /* consolidate forward */
4279             if (next == fm->top) {
4280               size_t tsize = fm->topsize += psize;
4281               fm->top = p;
4282               p->head = tsize | PINUSE_BIT;
4283               if (p == fm->dv) {
4284                 fm->dv = 0;
4285                 fm->dvsize = 0;
4286               }
4287               if (should_trim(fm, tsize))
4288                 sys_trim(fm, 0);
4289               goto postaction;
4290             }
4291             else if (next == fm->dv) {
4292               size_t dsize = fm->dvsize += psize;
4293               fm->dv = p;
4294               set_size_and_pinuse_of_free_chunk(p, dsize);
4295               goto postaction;
4296             }
4297             else {
4298               size_t nsize = chunksize(next);
4299               psize += nsize;
4300               unlink_chunk(fm, next, nsize);
4301               set_size_and_pinuse_of_free_chunk(p, psize);
4302               if (p == fm->dv) {
4303                 fm->dvsize = psize;
4304                 goto postaction;
4305               }
4306             }
4307           }
4308           else
4309             set_free_with_pinuse(p, psize, next);
4310           insert_chunk(fm, p, psize);
4311           check_free_chunk(fm, p);
4312           goto postaction;
4313         }
4314       }
4315     erroraction:
4316       USAGE_ERROR_ACTION(fm, p);
4317     postaction:
4318       POSTACTION(fm);
4319     }
4320   }
4321 #if !FOOTERS
4322 #undef fm
4323 #endif /* FOOTERS */
4324 }
4325 
4326 void* dlcalloc(size_t n_elements, size_t elem_size) {
4327   void* mem;
4328   size_t req = 0;
4329   if (n_elements != 0) {
4330     req = n_elements * elem_size;
4331     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4332         (req / n_elements != elem_size))
4333       req = MAX_SIZE_T; /* force downstream failure on overflow */
4334   }
4335   mem = dlmalloc(req);
4336   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4337     memset(mem, 0, req);
4338   return mem;
4339 }
4340 
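/*
  The multiplication guard in dlcalloc above only pays for a division when
  overflow is actually possible: if both n_elements and elem_size fit in 16
  bits, their product fits in 32 bits and cannot wrap, assuming size_t is at
  least 32 bits wide.  A standalone sketch of the same idiom follows;
  checked_mul is a hypothetical helper, not part of this allocator:

    static size_t checked_mul(size_t a, size_t b) {
      size_t r = a * b;
      if (((a | b) & ~(size_t)0xffff) &&   // either operand exceeds 16 bits
          a != 0 && r / a != b)            // and the product wrapped around
        return MAX_SIZE_T;                 // force the allocation to fail
      return r;
    }
*/
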
4341 void* dlrealloc(void* oldmem, size_t bytes) {
4342   if (oldmem == 0)
4343     return dlmalloc(bytes);
4344 #ifdef REALLOC_ZERO_BYTES_FREES
4345   if (bytes == 0) {
4346     dlfree(oldmem);
4347     return 0;
4348   }
4349 #endif /* REALLOC_ZERO_BYTES_FREES */
4350   else {
4351 #if ! FOOTERS
4352     mstate m = gm;
4353 #else /* FOOTERS */
4354     mstate m = get_mstate_for(mem2chunk(oldmem));
4355     if (!ok_magic(m)) {
4356       USAGE_ERROR_ACTION(m, oldmem);
4357       return 0;
4358     }
4359 #endif /* FOOTERS */
4360     return internal_realloc(m, oldmem, bytes);
4361   }
4362 }
4363 
4364 void* dlmemalign(size_t alignment, size_t bytes) {
4365   return internal_memalign(gm, alignment, bytes);
4366 }
4367 
4368 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4369                                  void* chunks[]) {
4370   size_t sz = elem_size; /* serves as 1-element array */
4371   return ialloc(gm, n_elements, &sz, 3, chunks);
4372 }
4373 
4374 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4375                                    void* chunks[]) {
4376   return ialloc(gm, n_elements, sizes, 0, chunks);
4377 }
4378 
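/*
  A hedged usage sketch for dlindependent_comalloc: allocate one header and
  two same-lifetime arrays in a single call, so the pieces are typically
  carved from one contiguous chunk while each remains individually freeable
  with dlfree.  The struct and field names are illustrative only:

    struct frame { int n; double* xs; char* tags; };

    struct frame* make_frame(int n) {
      size_t sizes[3];
      void*  parts[3];
      struct frame* f;
      sizes[0] = sizeof(struct frame);
      sizes[1] = (size_t)n * sizeof(double);
      sizes[2] = (size_t)n * sizeof(char);
      if (dlindependent_comalloc(3, sizes, parts) == 0)
        return 0;                       // allocation failed
      f = (struct frame*)parts[0];
      f->n    = n;
      f->xs   = (double*)parts[1];
      f->tags = (char*)parts[2];
      return f;
    }
*/
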
4379 void* dlvalloc(size_t bytes) {
4380   size_t pagesz;
4381   init_mparams();
4382   pagesz = mparams.page_size;
4383   return dlmemalign(pagesz, bytes);
4384 }
4385 
4386 void* dlpvalloc(size_t bytes) {
4387   size_t pagesz;
4388   init_mparams();
4389   pagesz = mparams.page_size;
4390   return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4391 }
4392 
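/*
  dlpvalloc rounds the request up to a whole number of pages before handing
  it to dlmemalign.  The mask arithmetic relies on page_size being a power
  of two; with a hypothetical 4096-byte page:

      (1    + 4095) & ~(size_t)4095  ==  4096
      (4096 + 4095) & ~(size_t)4095  ==  4096
      (4097 + 4095) & ~(size_t)4095  ==  8192
*/
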
4393 int dlmalloc_trim(size_t pad) {
4394   int result = 0;
4395   if (!PREACTION(gm)) {
4396     result = sys_trim(gm, pad);
4397     POSTACTION(gm);
4398   }
4399   return result;
4400 }
4401 
4402 size_t dlmalloc_footprint(void) {
4403   return gm->footprint;
4404 }
4405 
4406 size_t dlmalloc_max_footprint(void) {
4407   return gm->max_footprint;
4408 }
4409 
4410 #if !NO_MALLINFO
4411 struct mallinfo dlmallinfo(void) {
4412   size_t largest_free;
4413   return internal_mallinfo(gm, &largest_free);
4414 }
4415 struct mallinfo dlmallinfo_ex(size_t *plargest_free) {
4416   return internal_mallinfo(gm, plargest_free);
4417 }
4418 #endif /* NO_MALLINFO */
4419 
4420 void dlmalloc_stats() {
4421   internal_malloc_stats(gm);
4422 }
4423 
4424 size_t dlmalloc_usable_size(void* mem) {
4425   if (mem != 0) {
4426     mchunkptr p = mem2chunk(mem);
4427     if (cinuse(p))
4428       return chunksize(p) - overhead_for(p);
4429   }
4430   return 0;
4431 }
4432 
4433 int dlmallopt(int param_number, int value) {
4434   return change_mparam(param_number, value);
4435 }
4436 
4437 #endif /* !ONLY_MSPACES */
4438 
4439 /* ----------------------------- user mspaces ---------------------------- */
4440 
4441 #if MSPACES
4442 
4443 static mstate init_user_mstate(char* tbase, size_t tsize) {
4444   size_t msize = pad_request(sizeof(struct malloc_state));
4445   mchunkptr mn;
4446   mchunkptr msp = align_as_chunk(tbase);
4447   mstate m = (mstate)(chunk2mem(msp));
4448   memset(m, 0, msize);
4449   INITIAL_LOCK(&m->mutex);
4450   msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4451   m->seg.base = m->least_addr = tbase;
4452   m->seg.size = m->footprint = m->max_footprint = tsize;
4453   m->magic = mparams.magic;
4454   m->mflags = mparams.default_mflags;
4455   disable_contiguous(m);
4456   init_bins(m);
4457   mn = next_chunk(mem2chunk(m));
4458   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4459   check_top_chunk(m, m->top);
4460   return m;
4461 }
4462 
4463 mspace create_mspace(size_t capacity, int locked) {
4464   mstate m = 0;
4465   size_t msize = pad_request(sizeof(struct malloc_state));
4466   init_mparams(); /* Ensure pagesize etc initialized */
4467 
4468   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4469     size_t rs = ((capacity == 0)? mparams.granularity :
4470                  (capacity + TOP_FOOT_SIZE + msize));
4471     size_t tsize = granularity_align(rs);
4472     char* tbase = (char*)(CALL_MMAP(tsize));
4473     if (tbase != CMFAIL) {
4474       m = init_user_mstate(tbase, tsize);
4475       m->seg.sflags = IS_MMAPPED_BIT;
4476       set_lock(m, locked);
4477     }
4478   }
4479   return (mspace)m;
4480 }
4481 
4482 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4483   mstate m = 0;
4484   size_t msize = pad_request(sizeof(struct malloc_state));
4485   init_mparams(); /* Ensure pagesize etc initialized */
4486 
4487   if (capacity > msize + TOP_FOOT_SIZE &&
4488       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4489     m = init_user_mstate((char*)base, capacity);
4490     m->seg.sflags = EXTERN_BIT;
4491     set_lock(m, locked);
4492   }
4493   return (mspace)m;
4494 }
4495 
4496 size_t destroy_mspace(mspace msp) {
4497   size_t freed = 0;
4498   mstate ms = (mstate)msp;
4499   if (ok_magic(ms)) {
4500     msegmentptr sp = &ms->seg;
4501     while (sp != 0) {
4502       char* base = sp->base;
4503       size_t size = sp->size;
4504       flag_t flag = sp->sflags;
4505       sp = sp->next;
4506       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4507           CALL_MUNMAP(base, size) == 0)
4508         freed += size;
4509     }
4510   }
4511   else {
4512     USAGE_ERROR_ACTION(ms,ms);
4513   }
4514   return freed;
4515 }
4516 
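/*
  A minimal sketch of the mspace lifecycle built from the routines above.
  The buffer name and size are illustrative; the buffer must be large
  enough to hold the mspace bookkeeping plus TOP_FOOT_SIZE:

    static char arena[64 * 1024];

    void mspace_demo(void) {
      mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        mspace_free(ms, p);
        destroy_mspace(ms);   // unmaps only mmapped segments;
                              // 'arena' itself is left untouched
      }
    }
*/
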
4517 /*
4518   mspace versions of routines are near-clones of the global
4519   versions. This is not so nice but better than the alternatives.
4520 */
4521 
4522 
4523 void* mspace_malloc(mspace msp, size_t bytes) {
4524   mstate ms = (mstate)msp;
4525   if (!ok_magic(ms)) {
4526     USAGE_ERROR_ACTION(ms,ms);
4527     return 0;
4528   }
4529   if (!PREACTION(ms)) {
4530     void* mem;
4531     size_t nb;
4532     if (bytes <= MAX_SMALL_REQUEST) {
4533       bindex_t idx;
4534       binmap_t smallbits;
4535       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4536       idx = small_index(nb);
4537       smallbits = ms->smallmap >> idx;
4538 
4539       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4540         mchunkptr b, p;
4541         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4542         b = smallbin_at(ms, idx);
4543         p = b->fd;
4544         assert(chunksize(p) == small_index2size(idx));
4545         unlink_first_small_chunk(ms, b, p, idx);
4546         set_inuse_and_pinuse(ms, p, small_index2size(idx));
4547         mem = chunk2mem(p);
4548         check_malloced_chunk(ms, mem, nb);
4549         goto postaction;
4550       }
4551 
4552       else if (nb > ms->dvsize) {
4553         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4554           mchunkptr b, p, r;
4555           size_t rsize;
4556           bindex_t i;
4557           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4558           binmap_t leastbit = least_bit(leftbits);
4559           compute_bit2idx(leastbit, i);
4560           b = smallbin_at(ms, i);
4561           p = b->fd;
4562           assert(chunksize(p) == small_index2size(i));
4563           unlink_first_small_chunk(ms, b, p, i);
4564           rsize = small_index2size(i) - nb;
4565           /* Fit here cannot be remainderless if 4byte sizes */
4566           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4567             set_inuse_and_pinuse(ms, p, small_index2size(i));
4568           else {
4569             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4570             r = chunk_plus_offset(p, nb);
4571             set_size_and_pinuse_of_free_chunk(r, rsize);
4572             replace_dv(ms, r, rsize);
4573           }
4574           mem = chunk2mem(p);
4575           check_malloced_chunk(ms, mem, nb);
4576           goto postaction;
4577         }
4578 
4579         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4580           check_malloced_chunk(ms, mem, nb);
4581           goto postaction;
4582         }
4583       }
4584     }
4585     else if (bytes >= MAX_REQUEST)
4586       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4587     else {
4588       nb = pad_request(bytes);
4589       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4590         check_malloced_chunk(ms, mem, nb);
4591         goto postaction;
4592       }
4593     }
4594 
4595     if (nb <= ms->dvsize) {
4596       size_t rsize = ms->dvsize - nb;
4597       mchunkptr p = ms->dv;
4598       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4599         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4600         ms->dvsize = rsize;
4601         set_size_and_pinuse_of_free_chunk(r, rsize);
4602         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4603       }
4604       else { /* exhaust dv */
4605         size_t dvs = ms->dvsize;
4606         ms->dvsize = 0;
4607         ms->dv = 0;
4608         set_inuse_and_pinuse(ms, p, dvs);
4609       }
4610       mem = chunk2mem(p);
4611       check_malloced_chunk(ms, mem, nb);
4612       goto postaction;
4613     }
4614 
4615     else if (nb < ms->topsize) { /* Split top */
4616       size_t rsize = ms->topsize -= nb;
4617       mchunkptr p = ms->top;
4618       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4619       r->head = rsize | PINUSE_BIT;
4620       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4621       mem = chunk2mem(p);
4622       check_top_chunk(ms, ms->top);
4623       check_malloced_chunk(ms, mem, nb);
4624       goto postaction;
4625     }
4626 
4627     mem = sys_alloc(ms, nb);
4628 
4629   postaction:
4630     POSTACTION(ms);
4631     return mem;
4632   }
4633 
4634   return 0;
4635 }
4636 
4637 void mspace_free(mspace msp, void* mem) {
4638   if (mem != 0) {
4639     mchunkptr p  = mem2chunk(mem);
4640 #if FOOTERS
4641     mstate fm = get_mstate_for(p);
4642 #else /* FOOTERS */
4643     mstate fm = (mstate)msp;
4644 #endif /* FOOTERS */
4645     if (!ok_magic(fm)) {
4646       USAGE_ERROR_ACTION(fm, p);
4647       return;
4648     }
4649     if (!PREACTION(fm)) {
4650       check_inuse_chunk(fm, p);
4651       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4652         size_t psize = chunksize(p);
4653         mchunkptr next = chunk_plus_offset(p, psize);
4654         if (!pinuse(p)) {
4655           size_t prevsize = p->prev_foot;
4656           if ((prevsize & IS_MMAPPED_BIT) != 0) {
4657             prevsize &= ~IS_MMAPPED_BIT;
4658             psize += prevsize + MMAP_FOOT_PAD;
4659             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4660               fm->footprint -= psize;
4661             goto postaction;
4662           }
4663           else {
4664             mchunkptr prev = chunk_minus_offset(p, prevsize);
4665             psize += prevsize;
4666             p = prev;
4667             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4668               if (p != fm->dv) {
4669                 unlink_chunk(fm, p, prevsize);
4670               }
4671               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4672                 fm->dvsize = psize;
4673                 set_free_with_pinuse(p, psize, next);
4674                 goto postaction;
4675               }
4676             }
4677             else
4678               goto erroraction;
4679           }
4680         }
4681 
4682         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4683           if (!cinuse(next)) {  /* consolidate forward */
4684             if (next == fm->top) {
4685               size_t tsize = fm->topsize += psize;
4686               fm->top = p;
4687               p->head = tsize | PINUSE_BIT;
4688               if (p == fm->dv) {
4689                 fm->dv = 0;
4690                 fm->dvsize = 0;
4691               }
4692               if (should_trim(fm, tsize))
4693                 sys_trim(fm, 0);
4694               goto postaction;
4695             }
4696             else if (next == fm->dv) {
4697               size_t dsize = fm->dvsize += psize;
4698               fm->dv = p;
4699               set_size_and_pinuse_of_free_chunk(p, dsize);
4700               goto postaction;
4701             }
4702             else {
4703               size_t nsize = chunksize(next);
4704               psize += nsize;
4705               unlink_chunk(fm, next, nsize);
4706               set_size_and_pinuse_of_free_chunk(p, psize);
4707               if (p == fm->dv) {
4708                 fm->dvsize = psize;
4709                 goto postaction;
4710               }
4711             }
4712           }
4713           else
4714             set_free_with_pinuse(p, psize, next);
4715           insert_chunk(fm, p, psize);
4716           check_free_chunk(fm, p);
4717           goto postaction;
4718         }
4719       }
4720     erroraction:
4721       USAGE_ERROR_ACTION(fm, p);
4722     postaction:
4723       POSTACTION(fm);
4724     }
4725   }
4726 }
4727 
4728 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4729   void* mem;
4730   size_t req = 0;
4731   mstate ms = (mstate)msp;
4732   if (!ok_magic(ms)) {
4733     USAGE_ERROR_ACTION(ms,ms);
4734     return 0;
4735   }
4736   if (n_elements != 0) {
4737     req = n_elements * elem_size;
4738     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4739         (req / n_elements != elem_size))
4740       req = MAX_SIZE_T; /* force downstream failure on overflow */
4741   }
4742   mem = internal_malloc(ms, req);
4743   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4744     memset(mem, 0, req);
4745   return mem;
4746 }
4747 
4748 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4749   if (oldmem == 0)
4750     return mspace_malloc(msp, bytes);
4751 #ifdef REALLOC_ZERO_BYTES_FREES
4752   if (bytes == 0) {
4753     mspace_free(msp, oldmem);
4754     return 0;
4755   }
4756 #endif /* REALLOC_ZERO_BYTES_FREES */
4757   else {
4758 #if FOOTERS
4759     mchunkptr p  = mem2chunk(oldmem);
4760     mstate ms = get_mstate_for(p);
4761 #else /* FOOTERS */
4762     mstate ms = (mstate)msp;
4763 #endif /* FOOTERS */
4764     if (!ok_magic(ms)) {
4765       USAGE_ERROR_ACTION(ms,ms);
4766       return 0;
4767     }
4768     return internal_realloc(ms, oldmem, bytes);
4769   }
4770 }
4771 
4772 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4773   mstate ms = (mstate)msp;
4774   if (!ok_magic(ms)) {
4775     USAGE_ERROR_ACTION(ms,ms);
4776     return 0;
4777   }
4778   return internal_memalign(ms, alignment, bytes);
4779 }
4780 
4781 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4782                                  size_t elem_size, void* chunks[]) {
4783   size_t sz = elem_size; /* serves as 1-element array */
4784   mstate ms = (mstate)msp;
4785   if (!ok_magic(ms)) {
4786     USAGE_ERROR_ACTION(ms,ms);
4787     return 0;
4788   }
4789   return ialloc(ms, n_elements, &sz, 3, chunks);
4790 }
4791 
4792 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4793                                    size_t sizes[], void* chunks[]) {
4794   mstate ms = (mstate)msp;
4795   if (!ok_magic(ms)) {
4796     USAGE_ERROR_ACTION(ms,ms);
4797     return 0;
4798   }
4799   return ialloc(ms, n_elements, sizes, 0, chunks);
4800 }
4801 
4802 int mspace_trim(mspace msp, size_t pad) {
4803   int result = 0;
4804   mstate ms = (mstate)msp;
4805   if (ok_magic(ms)) {
4806     if (!PREACTION(ms)) {
4807       result = sys_trim(ms, pad);
4808       POSTACTION(ms);
4809     }
4810   }
4811   else {
4812     USAGE_ERROR_ACTION(ms,ms);
4813   }
4814   return result;
4815 }
4816 
4817 void mspace_malloc_stats(mspace msp) {
4818   mstate ms = (mstate)msp;
4819   if (ok_magic(ms)) {
4820     internal_malloc_stats(ms);
4821   }
4822   else {
4823     USAGE_ERROR_ACTION(ms,ms);
4824   }
4825 }
4826 
4827 size_t mspace_footprint(mspace msp) {
4828   size_t result = 0;
4829   mstate ms = (mstate)msp;
4830   if (ok_magic(ms)) {
4831     result = ms->footprint;
4832   }
4833   else { USAGE_ERROR_ACTION(ms,ms); }
4834   return result;
4835 }
4836 
4837 
4838 size_t mspace_max_footprint(mspace msp) {
4839   size_t result = 0;
4840   mstate ms = (mstate)msp;
4841   if (ok_magic(ms)) {
4842     result = ms->max_footprint;
4843   }
4844   else { USAGE_ERROR_ACTION(ms,ms); }
4845   return result;
4846 }
4847 
4848 
4849 #if !NO_MALLINFO
4850 struct mallinfo mspace_mallinfo(mspace msp) {
4851   mstate ms = (mstate)msp;  size_t largest_free;  /* filled by internal_mallinfo */
4852   if (!ok_magic(ms)) {
4853     USAGE_ERROR_ACTION(ms,ms);
4854   }
4855   return internal_mallinfo(ms, &largest_free);
4856 }
4857 #endif /* NO_MALLINFO */
4858 
4859 int mspace_mallopt(int param_number, int value) {
4860   return change_mparam(param_number, value);
4861 }
4862 
4863 #endif /* MSPACES */
4864 
4865 /* -------------------- Alternative MORECORE functions ------------------- */
4866 
4867 /*
4868   Guidelines for creating a custom version of MORECORE:
4869 
4870   * For best performance, MORECORE should allocate in multiples of pagesize.
4871   * MORECORE may allocate more memory than requested. (Or even less,
4872       but this will usually result in a malloc failure.)
4873   * MORECORE must not allocate memory when given argument zero, but
4874       instead return one past the end address of memory from previous
4875       nonzero call.
4876   * For best performance, consecutive calls to MORECORE with positive
4877       arguments should return increasing addresses, indicating that
4878       space has been contiguously extended.
4879   * Even though consecutive calls to MORECORE need not return contiguous
4880       addresses, it must be OK for malloc'ed chunks to span multiple
4881       regions in those cases where they do happen to be contiguous.
4882   * MORECORE need not handle negative arguments -- it may instead
4883       just return MFAIL when given negative arguments.
4884       Negative arguments are always multiples of pagesize. MORECORE
4885       must not misinterpret negative args as large positive unsigned
4886       args. You can suppress all such calls from even occurring by defining
4887       MORECORE_CANNOT_TRIM.
4888 
4889   As an example alternative MORECORE, here is a custom allocator
4890   kindly contributed for pre-OSX macOS.  It uses virtually but not
4891   necessarily physically contiguous non-paged memory (locked in,
4892   present and won't get swapped out).  You can use it by uncommenting
4893   this section, adding some #includes, and setting up the appropriate
4894   defines above:
4895 
4896       #define MORECORE osMoreCore
4897 
4898   There is also a shutdown routine that should somehow be called for
4899   cleanup upon program exit.
4900 
4901   #define MAX_POOL_ENTRIES 100
4902   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
4903   static int next_os_pool;
4904   void *our_os_pools[MAX_POOL_ENTRIES];
4905 
4906   void *osMoreCore(int size)
4907   {
4908     void *ptr = 0;
4909     static void *sbrk_top = 0;
4910 
4911     if (size > 0)
4912     {
4913       if (size < MINIMUM_MORECORE_SIZE)
4914          size = MINIMUM_MORECORE_SIZE;
4915       if (CurrentExecutionLevel() == kTaskLevel)
4916          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4917       if (ptr == 0)
4918       {
4919         return (void *) MFAIL;
4920       }
4921       // save ptrs so they can be freed during cleanup
4922       our_os_pools[next_os_pool] = ptr;
4923       next_os_pool++;
4924       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4925       sbrk_top = (char *) ptr + size;
4926       return ptr;
4927     }
4928     else if (size < 0)
4929     {
4930       // we don't currently support shrink behavior
4931       return (void *) MFAIL;
4932     }
4933     else
4934     {
4935       return sbrk_top;
4936     }
4937   }
4938 
4939   // cleanup any allocated memory pools
4940   // called as last thing before shutting down driver
4941 
4942   void osCleanupMem(void)
4943   {
4944     void **ptr;
4945 
4946     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4947       if (*ptr)
4948       {
4949          PoolDeallocate(*ptr);
4950          *ptr = 0;
4951       }
4952   }
4953 
4954 */
4955 
4956 
4957 /* -----------------------------------------------------------------------
4958 History:
4959     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
4960       * Add max_footprint functions
4961       * Ensure all appropriate literals are size_t
4962       * Fix conditional compilation problem for some #define settings
4963       * Avoid concatenating segments with the one provided
4964         in create_mspace_with_base
4965       * Rename some variables to avoid compiler shadowing warnings
4966       * Use explicit lock initialization.
4967       * Better handling of sbrk interference.
4968       * Simplify and fix segment insertion, trimming and mspace_destroy
4969       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4970       * Thanks especially to Dennis Flanagan for help on these.
4971 
4972     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
4973       * Fix memalign brace error.
4974 
4975     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
4976       * Fix improper #endif nesting in C++
4977       * Add explicit casts needed for C++
4978 
4979     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
4980       * Use trees for large bins
4981       * Support mspaces
4982       * Use segments to unify sbrk-based and mmap-based system allocation,
4983         removing need for emulation on most platforms without sbrk.
4984       * Default safety checks
4985       * Optional footer checks. Thanks to William Robertson for the idea.
4986       * Internal code refactoring
4987       * Incorporate suggestions and platform-specific changes.
4988         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4989         Aaron Bachmann,  Emery Berger, and others.
4990       * Speed up non-fastbin processing enough to remove fastbins.
4991       * Remove useless cfree() to avoid conflicts with other apps.
4992       * Remove internal memcpy, memset. Compilers handle builtins better.
4993       * Remove some options that no one ever used and rename others.
4994 
4995     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
4996       * Fix malloc_state bitmap array misdeclaration
4997 
4998     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
4999       * Allow tuning of FIRST_SORTED_BIN_SIZE
5000       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
5001       * Better detection and support for non-contiguousness of MORECORE.
5002         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
5003       * Bypass most of malloc if no frees. Thanks To Emery Berger.
5004       * Fix freeing of old top non-contiguous chunk in sysmalloc.
5005       * Raised default trim and map thresholds to 256K.
5006       * Fix mmap-related #defines. Thanks to Lubos Lunak.
5007       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
5008       * Branch-free bin calculation
5009       * Default trim and mmap thresholds now 256K.
5010 
5011     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
5012       * Introduce independent_comalloc and independent_calloc.
5013         Thanks to Michael Pachos for motivation and help.
5014       * Make optional .h file available
5015       * Allow > 2GB requests on 32bit systems.
5016       * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
5017         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
5018         and Anonymous.
5019       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
5020         helping test this.)
5021       * memalign: check alignment arg
5022       * realloc: don't try to shift chunks backwards, since this
5023         leads to  more fragmentation in some programs and doesn't
5024         seem to help in any others.
5025       * Collect all cases in malloc requiring system memory into sysmalloc
5026       * Use mmap as backup to sbrk
5027       * Place all internal state in malloc_state
5028       * Introduce fastbins (although similar to 2.5.1)
5029       * Many minor tunings and cosmetic improvements
5030       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5031       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5032         Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5033       * Include errno.h to support default failure action.
5034 
5035     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
5036       * return null for negative arguments
5037       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5038          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5039           (e.g. WIN32 platforms)
5040          * Cleanup header file inclusion for WIN32 platforms
5041          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5042          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5043            memory allocation routines
5044          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5045          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5046            usage of 'assert' in non-WIN32 code
5047          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5048            avoid infinite loop
5049       * Always call 'fREe()' rather than 'free()'
5050 
5051     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
5052       * Fixed ordering problem with boundary-stamping
5053 
5054     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
5055       * Added pvalloc, as recommended by H.J. Liu
5056       * Added 64bit pointer support mainly from Wolfram Gloger
5057       * Added anonymously donated WIN32 sbrk emulation
5058       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5059       * malloc_extend_top: fix mask error that caused wastage after
5060         foreign sbrks
5061       * Add linux mremap support code from HJ Liu
5062 
5063     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
5064       * Integrated most documentation with the code.
5065       * Add support for mmap, with help from
5066         Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5067       * Use last_remainder in more cases.
5068       * Pack bins using idea from  colin@nyx10.cs.du.edu
5069       * Use ordered bins instead of best-fit threshold
5070       * Eliminate block-local decls to simplify tracing and debugging.
5071       * Support another case of realloc via move into top
5072       * Fix error occurring when initial sbrk_base not word-aligned.
5073       * Rely on page size for units instead of SBRK_UNIT to
5074         avoid surprises about sbrk alignment conventions.
5075       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5076         (raymond@es.ele.tue.nl) for the suggestion.
5077       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5078       * More precautions for cases where other routines call sbrk,
5079         courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5080       * Added macros etc., allowing use in linux libc from
5081         H.J. Lu (hjl@gnu.ai.mit.edu)
5082       * Inverted this history list
5083 
5084     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
5085       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5086       * Removed all preallocation code since under current scheme
5087         the work required to undo bad preallocations exceeds
5088         the work saved in good cases for most test programs.
5089       * No longer use return list or unconsolidated bins since
5090         no scheme using them consistently outperforms those that don't
5091         given above changes.
5092       * Use best fit for very large chunks to prevent some worst-cases.
5093       * Added some support for debugging
5094 
5095     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
5096       * Removed footers when chunks are in use. Thanks to
5097         Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5098 
5099     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
5100       * Added malloc_trim, with help from Wolfram Gloger
5101         (wmglo@Dent.MED.Uni-Muenchen.DE).
5102 
5103     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
5104 
5105     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
5106       * realloc: try to expand in both directions
5107       * malloc: swap order of clean-bin strategy;
5108       * realloc: only conditionally expand backwards
5109       * Try not to scavenge used bins
5110       * Use bin counts as a guide to preallocation
5111       * Occasionally bin return list chunks in first scan
5112       * Add a few optimizations from colin@nyx10.cs.du.edu
5113 
5114     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
5115       * faster bin computation & slightly different binning
5116       * merged all consolidations to one part of malloc proper
5117          (eliminating old malloc_find_space & malloc_clean_bin)
5118       * Scan 2 returns chunks (not just 1)
5119       * Propagate failure in realloc if malloc returns 0
5120       * Add stuff to allow compilation on non-ANSI compilers
5121           from kpv@research.att.com
5122 
5123     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
5124       * removed potential for odd address access in prev_chunk
5125       * removed dependency on getpagesize.h
5126       * misc cosmetics and a bit more internal documentation
5127       * anticosmetics: mangled names in macros to evade debugger strangeness
5128       * tested on sparc, hp-700, dec-mips, rs6000
5129           with gcc & native cc (hp, dec only) allowing
5130           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5131 
5132     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
5133       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5134          structure of old version,  but most details differ.)
5135 
5136 */
5137