0.1-0.5:
    - Proof of concept implementation by Hans Van den Eynden and Yves Younan
0.6-0.7:
    - Bug fixes by Yves Younan        
0.8-1.0.beta4:
    - Reimplementation from scratch by Yves Younan        
1.0.beta4:
    - Public release
1.0.beta5:
    - Sped up prev_chunkinfo, which was very slow because of the way lookups were done
    - A freechunkinfo region is now freed when it is completely empty and is not the current one
   1.0 (Rainer Wichmann [support at la dash samhna dot org]):
   ---------------------

   Compiler warnings fixed
   Define REALLOC_ZERO_BYTES_FREES because it's what GNU libc does
       (and what the standard says)
   Remove unused code
   Fix  assert(aligned_OK(chunk(newp)));
    ->  assert(aligned_OK(chunk(oldp)));
   Fix statistics in sYSMALLOc
   Fix overwrite of av->top in sYSMALLOc
   Provide our own assert(); the glibc assert() cannot be used because it calls malloc
   Fix bug in mEMALIGn(): put the remainder in the hashtable before calling fREe
   Remove cfree, independent_cmalloc, independent_comalloc (untested
       public functions not covered by any standard)
   Provide posix_memalign (that one is in the standard)
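   The posix_memalign calling convention differs from the removed memalign-style
   helpers; a minimal sketch (the wrapper name xmemalign is illustrative, not
   part of dnmalloc):

```c
#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <stdint.h>

/* posix_memalign(): alignment must be a power of two and a multiple of
 * sizeof(void *); it returns 0 on success (it does not set errno) and
 * stores the pointer through its first argument. The memory is released
 * with plain free(). */
static void *xmemalign(size_t alignment, size_t size)
{
    void *p = NULL;
    return posix_memalign(&p, alignment, size) == 0 ? p : NULL;
}
```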
   Move the malloc_state struct to mmapped memory protected by guard pages
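   The guard-page idea can be sketched as follows (hypothetical helper, not
   dnmalloc's actual code): map the protected data with an inaccessible page
   on each side, so a linear overflow faults instead of silently corrupting
   the allocator state.

```c
#define _DEFAULT_SOURCE
#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Map `size` bytes with a PROT_NONE guard page before and after them.
 * Writing past either end of the returned region raises SIGSEGV. */
static void *map_with_guard_pages(size_t size)
{
    size_t page = (size_t) sysconf(_SC_PAGESIZE);
    size_t body = (size + page - 1) & ~(page - 1);   /* round up to pages */
    uint8_t *base = mmap(NULL, body + 2 * page, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    /* Open up only the middle; the first and last page stay PROT_NONE. */
    if (mprotect(base + page, body, PROT_READ | PROT_WRITE) != 0) {
        munmap(base, body + 2 * page);
        return NULL;
    }
    return base + page;
}
```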
   Add arc4random function to initialize random canary on startup
   Implement random canary at end of (re|m)alloced/memaligned buffer,
       check at free/realloc
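   The canary scheme amounts to something like the following sketch (function
   names are illustrative; dnmalloc knows the buffer size from its chunkinfo,
   so it does not need the size passed at free time):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* In dnmalloc the canary is filled from arc4random() at startup; a
 * fixed value stands in here for illustration. */
static uintptr_t canary = 0x9e3779b9u;

/* Over-allocate by sizeof(canary) and copy the canary value just past
 * the user's bytes. */
static void *canary_malloc(size_t n)
{
    uint8_t *p = malloc(n + sizeof canary);
    if (p)
        memcpy(p + n, &canary, sizeof canary);  /* unaligned-safe copy */
    return p;
}

/* On free, verify the canary survived; a mismatch means the buffer was
 * overflowed, so abort rather than corrupt the heap further. */
static void canary_free(void *p, size_t n)
{
    uintptr_t stored;
    memcpy(&stored, (uint8_t *) p + n, sizeof stored);
    if (stored != canary)
        abort();                                /* overflow detected */
    free(p);
}
```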
   Remove code conditional on !HAVE_MMAP, since mmap is required anyway.
   Use standard HAVE_foo macros (as generated by autoconf) instead of LACKS_foo

   Profiling: Reorder branches in hashtable_add, next_chunkinfo,
                  prev_chunkinfo, hashtable_insert, mALLOc, fREe, request2size,
                  checked_request2size (gcc predicts if{} branch to be taken).
              Use UNLIKELY macro (gcc __builtin_expect()) where branch   
                  reordering would make the code awkward.
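   The UNLIKELY macro is the usual __builtin_expect wrapper; a typical
   definition (the non-GNU fallback is an assumption, since the source does
   not show the macro):

```c
/* __builtin_expect() tells gcc which way a branch usually goes, so the
 * hot path becomes the straight-line fall-through. The !! normalizes
 * any truthy value to 0 or 1 before comparing against 0. */
#if defined(__GNUC__)
#define UNLIKELY(x) __builtin_expect(!!(x), 0)
#else
#define UNLIKELY(x) (x)
#endif

/* Usage: keep the rare error path out of the common case, e.g.
 *   if (UNLIKELY(p == NULL)) handle_oom();
 */
```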

   Portability: Hashtable always covers full 32bit address space to
                avoid assumptions about memory layout.
   Portability: Try hard to enforce mapping of mmapped memory into
                32bit address space, even on 64bit systems.
   Portability: Provide a dnmalloc_pthread_init() function, since
                pthread locking on HP-UX only works if initialized
                after the application has entered main().
   Portability: On *BSD, pthread_mutex_lock is unusable since it
                calls malloc, use spinlocks instead.
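   A spinlock avoids the problem because it allocates nothing: it is just an
   int spun on with an atomic test-and-set. A minimal sketch using the gcc
   __sync builtins (illustrative, not dnmalloc's exact code):

```c
typedef volatile int spinlock_t;    /* 0 = unlocked, 1 = held */

/* Atomically set the lock to 1 and get the old value back; keep
 * spinning until the old value was 0, i.e. we took a free lock. */
static void spin_lock(spinlock_t *l)
{
    while (__sync_lock_test_and_set(l, 1))
        ;                           /* busy-wait; no allocation, ever */
}

/* Store 0 with release semantics so the critical section's writes are
 * visible before the lock appears free. */
static void spin_unlock(spinlock_t *l)
{
    __sync_lock_release(l);
}
```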
   Portability: Dynamically detect whether the heap is within
                32bit address range (e.g. on Linux x86_64, it isn't).
                Don't use sbrk() if the heap is mapped to an address
                outside the 32bit range, since this doesn't work with
                the hashtable. New macro morecore32bit.

   Success on: HP-UX 11.11/pthread, Linux/pthread (32/64 bit),
               FreeBSD/pthread, and Solaris 10 i386/pthread.
   Fail    on: OpenBSD/pthread (in _thread_machdep_save_float_state),
               which might be related to OpenBSD pthread internals.
               The non-threaded version (#undef USE_MALLOC_LOCK)
               works on OpenBSD.

   There may be some bugs left in this version. Please use with caution.
