malloc & sysmalloc


Allocation Order Summary

(The checks are not explained in this summary and some cases have been omitted for brevity; a small reuse demo follows the list.)

1. `__libc_malloc` tries to get a chunk from the tcache; if it can't, it calls `_int_malloc`

2. `_int_malloc`:

3. Tries to generate the arena if there isn't one

4. If there is any fast bin chunk of the correct size, use it

5. Fill the tcache with other fast chunks

6. If there is any small bin chunk of the correct size, use it

7. Fill the tcache with other chunks of that size

8. If the requested size isn't for small bins, consolidate the fast bins into the unsorted bin

9. Check the unsorted bin, use the first chunk with enough space

10. If the found chunk is bigger, split it to return a part and add the remainder back to the unsorted bin

11. If a chunk is of the same size as the requested size, use it to fill the tcache instead of returning it (until the tcache is full, then return the next one)

12. For each chunk of smaller size checked, put it in its respective small or large bin

13. Check the large bin at the index of the requested size

14. Start looking from the first chunk that is bigger than the requested size; if any is found, return it and add the remainders to the small bin

15. Check the large bins from the next indexes until the end

16. From the next bigger index check for any chunk, split the first chunk found to use it for the requested size and add the remainder to the unsorted bin

17. If nothing is found in the previous bins, get a chunk from the top chunk

18. If the top chunk wasn't big enough, enlarge it with `sysmalloc`
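
The following minimal demo (not part of glibc; exact behaviour depends on the glibc version and tunables) illustrates the first step of that order, the LIFO reuse of tcache chunks:

```c
// Sketch: two chunks of the same tcache size are freed and then reallocated;
// with tcache enabled, the most recently freed chunk is typically returned first.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *a = malloc(0x48);
    void *b = malloc(0x48);
    printf("a=%p b=%p\n", a, b);

    free(a);
    free(b);                  /* both usually land in the same tcache bin */

    void *c = malloc(0x48);   /* typically returns b (last freed, LIFO) */
    void *d = malloc(0x48);   /* typically returns a */
    printf("c=%p d=%p\n", c, d);
    return 0;
}
```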

__libc_malloc

The `malloc` function actually calls `__libc_malloc`. This function checks the tcache to see whether there is an available chunk of the requested size. If there is, it uses it; if not, and the process is single-threaded, it calls `_int_malloc` on the main arena, and otherwise it calls `_int_malloc` on the thread's arena.
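
Before the code, here is a minimal sketch (not glibc code; assumed x86-64 defaults: `SIZE_SZ = 8`, `MALLOC_ALIGNMENT = 16`, `MINSIZE = 32`, 64 tcache bins) of how a request size is rounded like `request2size` and mapped to a tcache index like `csize2tidx`:

```c
// Illustrative re-implementation of the size->tcache-index mapping.
#include <stdio.h>
#include <stddef.h>

#define SIZE_SZ            8UL
#define MALLOC_ALIGNMENT   16UL
#define MALLOC_ALIGN_MASK  (MALLOC_ALIGNMENT - 1)
#define MINSIZE            32UL

static size_t request2size(size_t req)
{
    /* add header overhead, round up to the alignment, enforce MINSIZE */
    size_t sz = (req + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK;
    return sz < MINSIZE ? MINSIZE : sz;
}

static size_t csize2tidx(size_t chunk_size)
{
    return (chunk_size - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT;
}

int main(void)
{
    size_t reqs[] = { 8, 24, 25, 100, 1032 };
    for (size_t i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
        size_t nb = request2size(reqs[i]);
        printf("request %zu -> chunk size 0x%zx -> tcache idx %zu\n",
               reqs[i], nb, csize2tidx(nb));
    }
    return 0;
}
```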

<details>

<summary><code>__libc_malloc</code> code</summary>

```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c

#if IS_IN (libc)
void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  _Static_assert (PTRDIFF_MAX <= SIZE_MAX / 2,
                  "PTRDIFF_MAX is not more than half of SIZE_MAX");

  if (!__malloc_initialized)
    ptmalloc_init ();
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes = checked_request2size (bytes);
  if (tbytes == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      && tcache != NULL
      && tcache->counts[tc_idx] > 0)
    {
      victim = tcache_get (tc_idx);
      return tag_new_usable (victim);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      victim = tag_new_usable (_int_malloc (&main_arena, bytes));
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
              &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  victim = tag_new_usable (victim);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
```

</details>

Note how it will always tag the returned pointer with `tag_new_usable`, from the code:
```c
/* Allocate a new random color and use it to color the user region of
   a chunk; this may include data from the subsequent chunk's header
   if tagging is sufficiently fine grained.  Returns PTR suitably
   recolored for accessing the memory there.  */
void *tag_new_usable (void *ptr)
```

_int_malloc

This is the function that allocates memory using the other bins and the top chunk.

* Start

It starts by defining some variables and getting the real size that the requested memory space needs to have.

Fast Bin

If the needed size is inside the Fast Bin sizes, try to use a chunk from the fast bins. Basically, based on the size, it finds the fast bin index where valid chunks should be located and, if any, it returns one of them. Moreover, if tcache is enabled, it will also fill the tcache bin of that size with fast bins (a small index-computation sketch follows the list of checks below).

While performing these actions, some security checks are executed here:

* If the chunk is misaligned: `malloc(): unaligned fastbin chunk detected 2`

* If the forward chunk is misaligned: `malloc(): unaligned fastbin chunk detected`

* If the returned chunk has a size that isn't correct because of its index in the fast bin: `malloc(): memory corruption (fast)`

* If any chunk used to fill the tcache is misaligned: `malloc(): unaligned fastbin chunk detected 3`
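
For reference, a small sketch of the size-to-index mapping used by the fast bins (assumed 64-bit layout where `fastbin_index(sz)` is `(sz >> 4) - 2` and the default `global_max_fast` is 0x80); this is illustrative, not the glibc source:

```c
// Print which fastbin each fast-sized chunk goes to on a typical x86-64 build.
#include <stdio.h>

static unsigned fastbin_index(unsigned long chunk_size)
{
    /* 0x20 -> 0, 0x30 -> 1, ... 0x80 -> 6 */
    return (unsigned)((chunk_size >> 4) - 2);
}

int main(void)
{
    for (unsigned long sz = 0x20; sz <= 0x80; sz += 0x10)
        printf("chunk size 0x%lx -> fastbin[%u]\n", sz, fastbin_index(sz));
    return 0;
}
```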

malloc_consolidate

If the request was not for a small chunk but for a large one, `malloc_consolidate` is called in this case to avoid memory fragmentation.
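
A hedged way to observe this consolidation from user code is via `mallinfo2()` (glibc ≥ 2.33), whose `fsmblks` field counts bytes currently sitting in fast bins; the chunk sizes and the tcache count of 7 used here are assumptions about a default build:

```c
// Sketch: free more fast-sized chunks than the tcache holds, then make a
// large request so _int_malloc consolidates the fast bins. Illustrative only.
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    void *p[10];
    for (int i = 0; i < 10; i++)
        p[i] = malloc(0x48);
    for (int i = 0; i < 10; i++)
        free(p[i]);                       /* ~7 land in tcache, the rest in a fastbin */

    struct mallinfo2 before = mallinfo2();
    void *big = malloc(0x500);            /* large request -> malloc_consolidate() */
    struct mallinfo2 after  = mallinfo2();

    printf("fastbin bytes before: %zu, after: %zu\n", before.fsmblks, after.fsmblks);
    free(big);
    return 0;
}
```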

Unsorted bin

It's time to check the unsorted bin for a potential valid chunk to use.

Start

This starts with a big loop that will traverse the unsorted bin in the `bk` direction until it arrives at the end (the arena struct) with `while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))`

Moreover, some security checks are performed every time a new chunk is considered:

* If the chunk size is weird (too small or too big): `malloc(): invalid size (unsorted)`

* If the next chunk size is weird (too small or too big): `malloc(): invalid next size (unsorted)`

* If the previous size indicated by the next chunk differs from the size of the chunk: `malloc(): mismatching next->prev_size (unsorted)`

* If not `victim->bck->fd == victim` or not `victim->fd == av` (arena): `malloc(): unsorted double linked list corrupted`

* As we are always checking the last one, its `fd` should always be pointing to the arena struct.

* If the next chunk doesn't indicate that the previous is in use: `malloc(): invalid next->prev_inuse (unsorted)`

If this was successful, return the chunk and it's over; if not, continue executing the function...

if equal size

Continue removing the chunk from the bin, in case the requested size is exactly the size of the chunk:

* If the tcache is not filled, add it to the tcache and continue, indicating that there is a tcache chunk that could be used

* If the tcache is full, just use it by returning it

```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4126C11-L4157C14

/* remove from unsorted list */
unsorted_chunks (av)->bk = bck;
bck->fd = unsorted_chunks (av);

/* Take now instead of binning if exact fit */

if (size == nb)
  {
    set_inuse_bit_at_offset (victim, size);
    if (av != &main_arena)
      set_non_main_arena (victim);
#if USE_TCACHE
    /* Fill cache first, return to user only if cache fills.
       We may return one of these chunks later.  */
    if (tcache_nb > 0
        && tcache->counts[tc_idx] < mp_.tcache_count)
      {
        tcache_put (victim, tc_idx);
        return_cached = 1;
        continue;
      }
    else
      {
#endif
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
#if USE_TCACHE
      }
#endif
  }
```

If the chunk isn't returned or added to the tcache, continue with the code...

place chunk in a bin

Store the checked chunk in the small bin or in the large bin according to the size of the chunk (keeping the large bin properly organized).

Security checks are performed to make sure the large bin doubly linked lists aren't corrupted:

* If `fwd->bk_nextsize->fd_nextsize != fwd`: `malloc(): largebin double linked list corrupted (nextsize)`

* If `fwd->bk->fd != fwd`: `malloc(): largebin double linked list corrupted (bk)`
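
For context, the `fd_nextsize`/`bk_nextsize` pointers used by these checks are part of the chunk header itself. This is the `malloc_chunk` layout from glibc (comments abridged); `INTERNAL_SIZE_T` is normally just `size_t`:

```c
struct malloc_chunk {
  INTERNAL_SIZE_T      mchunk_prev_size;  /* Size of previous chunk, if free */
  INTERNAL_SIZE_T      mchunk_size;       /* Size in bytes, including overhead */

  struct malloc_chunk* fd;                /* double links -- used only if free */
  struct malloc_chunk* bk;

  /* Only used for large blocks: pointer to next larger size.  */
  struct malloc_chunk* fd_nextsize;       /* double links -- used only if free */
  struct malloc_chunk* bk_nextsize;
};
```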

`_int_malloc` place chunk in a bin

```c
/* place chunk in bin */

if (in_smallbin_range (size))
  {
    victim_index = smallbin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;
  }
else
  {
    victim_index = largebin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;

    /* maintain large bins in sorted order */
    if (fwd != bck)
      {
        /* Or with inuse bit to speed comparisons */
        size |= PREV_INUSE;
        /* if smaller than smallest, bypass loop below */
        assert (chunk_main_arena (bck->bk));
        if ((unsigned long) (size)
            < (unsigned long) chunksize_nomask (bck->bk))
          {
            fwd = bck;
            bck = bck->bk;

            victim->fd_nextsize = fwd->fd;
            victim->bk_nextsize = fwd->fd->bk_nextsize;
            fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
          }
        else
          {
            assert (chunk_main_arena (fwd));
            while ((unsigned long) size < chunksize_nomask (fwd))
              {
                fwd = fwd->fd_nextsize;
                assert (chunk_main_arena (fwd));
              }

            if ((unsigned long) size
                == (unsigned long) chunksize_nomask (fwd))
              /* Always insert in the second position.  */
              fwd = fwd->fd;
            else
              {
                victim->fd_nextsize = fwd;
                victim->bk_nextsize = fwd->bk_nextsize;
                if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
                  malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");
                fwd->bk_nextsize = victim;
                victim->bk_nextsize->fd_nextsize = victim;
              }
            bck = fwd->bk;
            if (bck->fd != fwd)
              malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");
          }
      }
    else
      victim->fd_nextsize = victim->bk_nextsize = victim;
  }

mark_bin (av, victim_index);
victim->bk = bck;
victim->fd = fwd;
fwd->bk = victim;
bck->fd = victim;
```

#### `_int_malloc` limits

At this point, if some chunk was stored in the tcache that can be used and the limit is reached, just **return a tcache chunk**.

Moreover, if **MAX\_ITERS** is reached, break the loop and get a chunk in a different way (top chunk).

If `return_cached` was set, just return a chunk from the tcache to avoid bigger searches.
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4227C1-L4250C7

#if USE_TCACHE
/* If we've processed as many chunks as we're allowed while
filling the cache, return one of the cached ones.  */
++tcache_unsorted_count;
if (return_cached
&& mp_.tcache_unsorted_limit > 0
&& tcache_unsorted_count > mp_.tcache_unsorted_limit)
{
return tcache_get (tc_idx);
}
#endif

#define MAX_ITERS       10000
if (++iters >= MAX_ITERS)
break;
}

#if USE_TCACHE
/* If all the small chunks we found ended up cached, return one now.  */
if (return_cached)
{
return tcache_get (tc_idx);
}
#endif
```

If the limits weren't reached, continue with the code...

Large bin (by index)

If the request is large (not in the small bin range) and we haven't returned any chunk yet, get the index of the requested size in the large bin, check that it's not empty and that the biggest chunk in this bin is bigger than the requested size, and in that case find the smallest chunk that can be used for the requested size.

If the remaining space from the finally used chunk can be a new chunk, add it to the unsorted bin and last_remainder is updated.

A security check is performed when adding the remainder to the unsorted bin:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks`

<details>

<summary><code>_int_malloc</code> Large bin (by index)</summary>

```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4252C7-L4317C10

/*
   If a large request, scan through the chunks of current bin in
   sorted order to find smallest that fits.  Use the skip list for this.
 */

if (!in_smallbin_range (nb))
  {
    bin = bin_at (av, idx);

    /* skip scan if empty or largest chunk is too small */
    if ((victim = first (bin)) != bin
        && (unsigned long) chunksize_nomask (victim)
           >= (unsigned long) (nb))
      {
        victim = victim->bk_nextsize;
        while (((unsigned long) (size = chunksize (victim)) <
                (unsigned long) (nb)))
          victim = victim->bk_nextsize;

        /* Avoid removing the first entry for a size so that the skip
           list does not have to be rerouted.  */
        if (victim != last (bin)
            && chunksize_nomask (victim) == chunksize_nomask (victim->fd))
          victim = victim->fd;

        remainder_size = size - nb;
        unlink_chunk (av, victim);

        /* Exhaust */
        if (remainder_size < MINSIZE)
          {
            set_inuse_bit_at_offset (victim, size);
            if (av != &main_arena)
              set_non_main_arena (victim);
          }
        /* Split */
        else
          {
            remainder = chunk_at_offset (victim, nb);
            /* We cannot assume the unsorted list is empty and therefore
               have to perform a complete insert here.  */
            bck = unsorted_chunks (av);
            fwd = bck->fd;
            if (__glibc_unlikely (fwd->bk != bck))
              malloc_printerr ("malloc(): corrupted unsorted chunks");
            remainder->bk = bck;
            remainder->fd = fwd;
            bck->fd = remainder;
            fwd->bk = remainder;
            if (!in_smallbin_range (remainder_size))
              {
                remainder->fd_nextsize = NULL;
                remainder->bk_nextsize = NULL;
              }
            set_head (victim, nb | PREV_INUSE |
                      (av != &main_arena ? NON_MAIN_ARENA : 0));
            set_head (remainder, remainder_size | PREV_INUSE);
            set_foot (remainder, remainder_size);
          }
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
      }
  }
```

</details>

If no chunk suitable for this is found, continue

### Large bin (next bigger)

If in the exact large bin there wasn't any chunk that could be used, start looping through all the following large bins (starting from the immediately bigger one) until one is found (if any).

The remainder of the split chunk is added to the unsorted bin, last_remainder is updated and the same security check is performed:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks 2`

<details>

<summary><code>_int_malloc</code> Large bin (next bigger)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4319C7-L4425C10

/*
Search for a chunk by scanning bins, starting with next largest
bin. This search is strictly by best-fit; i.e., the smallest
(with ties going to approximately the least recently used) chunk
that fits is selected.

The bitmap avoids needing to check that most blocks are nonempty.
The particular case of skipping all bins during warm-up phases
when no chunks have been returned yet is faster than it might look.
*/

++idx;
bin = bin_at (av, idx);
block = idx2block (idx);
map = av->binmap[block];
bit = idx2bit (idx);

for (;; )
{
/* Skip rest of block if there are no more set bits in this block.  */
if (bit > map || bit == 0)
{
do
{
if (++block >= BINMAPSIZE) /* out of bins */
goto use_top;
}
while ((map = av->binmap[block]) == 0);

bin = bin_at (av, (block << BINMAPSHIFT));
bit = 1;
}

/* Advance to bin with set bit. There must be one. */
while ((bit & map) == 0)
{
bin = next_bin (bin);
bit <<= 1;
assert (bit != 0);
}

/* Inspect the bin. It is likely to be non-empty */
victim = last (bin);

/*  If a false alarm (empty bin), clear the bit. */
if (victim == bin)
{
av->binmap[block] = map &= ~bit; /* Write through */
bin = next_bin (bin);
bit <<= 1;
}

else
{
size = chunksize (victim);

/*  We know the first chunk in this bin is big enough to use. */
assert ((unsigned long) (size) >= (unsigned long) (nb));

remainder_size = size - nb;

/* unlink */
unlink_chunk (av, victim);

/* Exhaust */
if (remainder_size < MINSIZE)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
}

/* Split */
else
{
remainder = chunk_at_offset (victim, nb);

/* We cannot assume the unsorted list is empty and therefore
have to perform a complete insert here.  */
bck = unsorted_chunks (av);
fwd = bck->fd;
if (__glibc_unlikely (fwd->bk != bck))
malloc_printerr ("malloc(): corrupted unsorted chunks 2");
remainder->bk = bck;
remainder->fd = fwd;
bck->fd = remainder;
fwd->bk = remainder;

/* advertise as last remainder */
if (in_smallbin_range (nb))
av->last_remainder = remainder;
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}
set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);
}
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}
```

</details>

Top Chunk

At this point, it's time to get a new chunk from the Top chunk (if big enough).

It starts with a security check making sure that the size of the chunk is not too big (corrupted):

* `chunksize(av->top) > av->system_mem`: `malloc(): corrupted top size`

Then, it will use the top chunk space if it's big enough to create a chunk of the requested size. If not, if there are fast chunks, consolidate them and try again. Finally, if there is not enough space, use `sysmalloc` to allocate enough size.
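
A small user-land sketch (assuming default glibc behaviour on Linux; addresses and growth amounts will vary) of how the top chunk is created and then extended through `sbrk` once it is exhausted:

```c
// Watch the program break move as the top chunk gets created and later enlarged.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    printf("brk before any malloc: %p\n", sbrk(0));

    void *a = malloc(0x100);   /* first request creates the heap (top chunk) */
    printf("brk after first malloc: %p (a=%p)\n", sbrk(0), a);

    for (int i = 0; i < 4; i++) {
        /* 64 KiB requests stay below the mmap threshold; once the top chunk
           runs out, sysmalloc grows the heap and brk moves again. */
        void *p = malloc(0x10000);
        printf("after malloc #%d: p=%p brk=%p\n", i, p, sbrk(0));
    }
    return 0;
}
```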

`_int_malloc` Top chunk

```c
use_top:
  /*
     If large enough, split off the chunk bordering the end of memory
     (held in av->top).  Note that this is in accord with the best-fit
     search rule.  In effect, av->top is treated as larger (and thus
     less well fitting) than any other available chunk since it can be
     extended to be as large as necessary (up to system limitations).

     We require that av->top always exists (i.e., has size >= MINSIZE)
     after initialization, so if it would otherwise be exhausted by
     current request, it is replenished.  (The main reason for ensuring
     it exists is that we may need MINSIZE space to put in fenceposts
     in sysmalloc.)
   */

  victim = av->top;
  size = chunksize (victim);

  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");

  if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
    {
      remainder_size = size - nb;
      remainder = chunk_at_offset (victim, nb);
      av->top = remainder;
      set_head (victim, nb | PREV_INUSE |
                (av != &main_arena ? NON_MAIN_ARENA : 0));
      set_head (remainder, remainder_size | PREV_INUSE);

      check_malloced_chunk (av, victim, nb);
      void *p = chunk2mem (victim);
      alloc_perturb (p, bytes);
      return p;
    }

  /* When we are using atomic ops to free fast chunks we can get
     here for all block sizes.  */
  else if (atomic_load_relaxed (&av->have_fastchunks))
    {
      malloc_consolidate (av);
      /* restore original bin index */
      if (in_smallbin_range (nb))
        idx = smallbin_index (nb);
      else
        idx = largebin_index (nb);
    }

  /* Otherwise, relay to handle system-dependent cases */
  else
    {
      void *p = sysmalloc (nb, av);
      if (p != NULL)
        alloc_perturb (p, bytes);
      return p;
    }
  }
}
```

### sysmalloc

### sysmalloc start

If the arena is null or the requested size is too big (and there are mmaps left permitted), use `sysmalloc_mmap` to allocate space and return it.
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2531

/*
sysmalloc handles malloc cases requiring more memory from the system.
On entry, it is assumed that av->top does not have enough
space to service request for nb bytes, thus requiring that av->top
be extended or replaced.
*/

static void *
sysmalloc (INTERNAL_SIZE_T nb, mstate av)
{
mchunkptr old_top;              /* incoming value of av->top */
INTERNAL_SIZE_T old_size;       /* its size */
char *old_end;                  /* its end address */

long size;                      /* arg to first MORECORE or mmap call */
char *brk;                      /* return value from MORECORE */

long correction;                /* arg to 2nd MORECORE call */
char *snd_brk;                  /* 2nd return val */

INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
char *aligned_brk;              /* aligned offset into brk */

mchunkptr p;                    /* the allocated/returned chunk */
mchunkptr remainder;            /* remainder from allocation */
unsigned long remainder_size;   /* its size */


size_t pagesize = GLRO (dl_pagesize);
bool tried_mmap = false;


/*
If have mmap, and the request size meets the mmap threshold, and
the system supports mmap, and there are few enough currently
allocated mmapped regions, try to directly map this request
rather than expanding top.
*/

if (av == NULL
|| ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
&& (mp_.n_mmaps < mp_.n_mmaps_max)))
{
char *mm;
if (mp_.hp_pagesize > 0 && nb >= mp_.hp_pagesize)
{
/* There is no need to issue the THP madvise call if Huge Pages are
used directly.  */
mm = sysmalloc_mmap (nb, mp_.hp_pagesize, mp_.hp_flags, av);
if (mm != MAP_FAILED)
return mm;
}
mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
tried_mmap = true;
}

/* There are no usable arenas and mmap also failed.  */
if (av == NULL)
return 0;
```

sysmalloc checks

It starts by getting old top chunk information and checking that some of the following conditions are true:

* The old heap size is 0 (new heap)

* The previous heap size is greater than MINSIZE and the old Top is in use

* The heap is aligned to the page size (0x1000, so the lower 12 bits need to be 0)

Then it also checks that:

* The old size doesn't have enough space to create a chunk of the requested size

<details>

<summary>sysmalloc checks</summary>

```c
/* Record incoming configuration of top */

old_top = av->top;
old_size = chunksize (old_top);
old_end = (char *) (chunk_at_offset (old_top, old_size));

brk = snd_brk = (char *) (MORECORE_FAILURE);

/*
   If not the first time through, we require old_size to be
   at least MINSIZE and to have prev_inuse set.
 */

assert ((old_top == initial_top (av) && old_size == 0) ||
        ((unsigned long) (old_size) >= MINSIZE &&
         prev_inuse (old_top) &&
         ((unsigned long) old_end & (pagesize - 1)) == 0));

/* Precondition: not enough current space to satisfy nb request */
assert ((unsigned long) (old_size) < (unsigned long) (nb + MINSIZE));
```

</details>

### sysmalloc not main arena

First it'll try to **extend** the previous heap for this heap. If that's not possible, it will try to **allocate a new heap** and update the pointers to be able to use it.\
Finally, if that doesn't work, it will try calling **`sysmalloc_mmap`**.

<details>

<summary>sysmalloc not main arena</summary>
```c
if (av != &main_arena)
{
heap_info *old_heap, *heap;
size_t old_heap_size;

/* First try to extend the current heap. */
old_heap = heap_for_ptr (old_top);
old_heap_size = old_heap->size;
if ((long) (MINSIZE + nb - old_size) > 0
&& grow_heap (old_heap, MINSIZE + nb - old_size) == 0)
{
av->system_mem += old_heap->size - old_heap_size;
set_head (old_top, (((char *) old_heap + old_heap->size) - (char *) old_top)
| PREV_INUSE);
}
else if ((heap = new_heap (nb + (MINSIZE + sizeof (*heap)), mp_.top_pad)))
{
/* Use a newly allocated heap.  */
heap->ar_ptr = av;
heap->prev = old_heap;
av->system_mem += heap->size;
/* Set up the new top.  */
top (av) = chunk_at_offset (heap, sizeof (*heap));
set_head (top (av), (heap->size - sizeof (*heap)) | PREV_INUSE);

/* Setup fencepost and free the old top chunk with a multiple of
MALLOC_ALIGNMENT in size. */
/* The fencepost takes at least MINSIZE bytes, because it might
become the top chunk again later.  Note that a footer is set
up, too, although the chunk is marked in use. */
old_size = (old_size - MINSIZE) & ~MALLOC_ALIGN_MASK;
set_head (chunk_at_offset (old_top, old_size + CHUNK_HDR_SZ),
0 | PREV_INUSE);
if (old_size >= MINSIZE)
{
set_head (chunk_at_offset (old_top, old_size),
CHUNK_HDR_SZ | PREV_INUSE);
set_foot (chunk_at_offset (old_top, old_size), CHUNK_HDR_SZ);
set_head (old_top, old_size | PREV_INUSE | NON_MAIN_ARENA);
_int_free (av, old_top, 1);
}
else
{
set_head (old_top, (old_size + CHUNK_HDR_SZ) | PREV_INUSE);
set_foot (old_top, (old_size + CHUNK_HDR_SZ));
}
}
else if (!tried_mmap)
{
/* We can at least try to use to mmap memory.  If new_heap fails
it is unlikely that trying to allocate huge pages will
succeed.  */
char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
}
}
```

</details>

sysmalloc main arena

It starts calculating the amount of memory needed. It will start by requesting contiguous memory, so in that case it will be possible to reuse the old, unused memory. Also some alignment operations are performed.
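
As a hedged worked example (assuming the documented default `M_TOP_PAD` of 128 KiB = 0x20000 and `MINSIZE` = 0x20 on x86-64): for a request whose chunk size is `nb = 0x110`, `size = 0x110 + 0x20000 + 0x20 = 0x20130`; if the arena is contiguous, `old_size` is subtracted, and the result is rounded up to a page multiple (here 0x21000 when `old_size` is 0), which is the amount passed to `MORECORE`/`sbrk`.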

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2665C1-L2713C10

else     /* av == main_arena */


{ /* Request enough space for nb + pad + overhead */
size = nb + mp_.top_pad + MINSIZE;

/*
If contiguous, we can subtract out existing space that we hope to
combine with new space. We add it back later only if
we don't actually get contiguous space.
*/

if (contiguous (av))
size -= old_size;

/*
Round to a multiple of page size or huge page size.
If MORECORE is not contiguous, this ensures that we only call it
with whole-page arguments.  And if MORECORE is contiguous and
this is not first time through, this preserves page-alignment of
previous calls. Otherwise, we correct to page-align below.
*/

#ifdef MADV_HUGEPAGE
/* Defined in brk.c.  */
extern void *__curbrk;
if (__glibc_unlikely (mp_.thp_pagesize != 0))
{
uintptr_t top = ALIGN_UP ((uintptr_t) __curbrk + size,
mp_.thp_pagesize);
size = top - (uintptr_t) __curbrk;
}
else
#endif
size = ALIGN_UP (size, GLRO(dl_pagesize));

/*
Don't try to call MORECORE if argument is so big as to appear
negative. Note that since mmap takes size_t arg, it may succeed
below even if we cannot call MORECORE.
*/

if (size > 0)
{
brk = (char *) (MORECORE (size));
if (brk != (char *) (MORECORE_FAILURE))
madvise_thp (brk, size);
LIBC_PROBE (memory_sbrk_more, 2, brk, size);
}
```

sysmalloc main arena previous error 1

If the previous call returned MORECORE_FAILURE, try again to allocate memory using `sysmalloc_mmap_fallback`

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2715C7-L2740C10

if (brk == (char *) (MORECORE_FAILURE))
{
/*
If have mmap, try using it as a backup when MORECORE fails or
cannot be used. This is worth doing on systems that have "holes" in
address space, so sbrk cannot extend to give contiguous space, but
space is available elsewhere.  Note that we ignore mmap max count
and threshold limits, since the space will not be used as a
segregated mmap region.
*/

char *mbrk = MAP_FAILED;
if (mp_.hp_pagesize > 0)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
mp_.hp_pagesize, mp_.hp_pagesize,
mp_.hp_flags, av);
if (mbrk == MAP_FAILED)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, MMAP_AS_MORECORE_SIZE,
pagesize, 0, av);
if (mbrk != MAP_FAILED)
{
/* We do not need, and cannot use, another sbrk call to find end */
brk = mbrk;
snd_brk = brk + size;
}
}
```

sysmalloc main arena continue

If the previous didn't return MORECORE_FAILURE, if it worked, perform some alignments:

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2742

if (brk != (char *) (MORECORE_FAILURE))
{
if (mp_.sbrk_base == 0)
mp_.sbrk_base = brk;
av->system_mem += size;

/*
If MORECORE extends previous space, we can likewise extend top size.
*/

if (brk == old_end && snd_brk == (char *) (MORECORE_FAILURE))
set_head (old_top, (size + old_size) | PREV_INUSE);

else if (contiguous (av) && old_size && brk < old_end)
/* Oops!  Someone else killed our space..  Can't touch anything.  */
malloc_printerr ("break adjusted to free malloc space");

/*
Otherwise, make adjustments:

* If the first time through or noncontiguous, we need to call sbrk
just to find out where the end of memory lies.

* We need to ensure that all returned chunks from malloc will meet
MALLOC_ALIGNMENT

* If there was an intervening foreign sbrk, we need to adjust sbrk
request size to account for fact that we will not be able to
combine new space with existing space in old_top.

* Almost all systems internally allocate whole pages at a time, in
which case we might as well use the whole last page of request.
So we allocate enough more memory to hit a page boundary now,
which in turn causes future contiguous calls to page-align.
*/

else
{
front_misalign = 0;
end_misalign = 0;
correction = 0;
aligned_brk = brk;

/* handle contiguous cases */
if (contiguous (av))
{
/* Count foreign sbrk as system_mem.  */
if (old_size)
av->system_mem += brk - old_end;

/* Guarantee alignment of first new chunk made from this space */

front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
if (front_misalign > 0)
{
/*
Skip over some bytes to arrive at an aligned position.
We don't need to specially mark these wasted front bytes.
They will never be accessed anyway because
prev_inuse of av->top (and any chunk created from its start)
is always true after initialization.
*/

correction = MALLOC_ALIGNMENT - front_misalign;
aligned_brk += correction;
}

/*
If this isn't adjacent to existing space, then we will not
be able to merge with old_top space, so must add to 2nd request.
*/

correction += old_size;

/* Extend the end address to hit a page boundary */
end_misalign = (INTERNAL_SIZE_T) (brk + size + correction);
correction += (ALIGN_UP (end_misalign, pagesize)) - end_misalign;

assert (correction >= 0);
snd_brk = (char *) (MORECORE (correction));

/*
If can't allocate correction, try to at least find out current
brk.  It might be enough to proceed without failing.

Note that if second sbrk did NOT fail, we assume that space
is contiguous with first sbrk. This is a safe assumption unless
program is multithreaded but doesn't use locks and a foreign sbrk
occurred between our first and second calls.
*/

if (snd_brk == (char *) (MORECORE_FAILURE))
{
correction = 0;
snd_brk = (char *) (MORECORE (0));
}
else
madvise_thp (snd_brk, correction);
}

/* handle non-contiguous cases */
else
{
if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
/* MORECORE/mmap must correctly align */
assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
else
{
front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
if (front_misalign > 0)
{
/*
Skip over some bytes to arrive at an aligned position.
We don't need to specially mark these wasted front bytes.
They will never be accessed anyway because
prev_inuse of av->top (and any chunk created from its start)
is always true after initialization.
*/

aligned_brk += MALLOC_ALIGNMENT - front_misalign;
}
}

/* Find out current end of memory */
if (snd_brk == (char *) (MORECORE_FAILURE))
{
snd_brk = (char *) (MORECORE (0));
}
}

/* Adjust top based on results of second sbrk */
if (snd_brk != (char *) (MORECORE_FAILURE))
{
av->top = (mchunkptr) aligned_brk;
set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
av->system_mem += correction;

/*
If not the first time through, we either have a
gap due to foreign sbrk or a non-contiguous region.  Insert a
double fencepost at old_top to prevent consolidation with space
we don't own. These fenceposts are artificial chunks that are
marked as inuse and are in any case too small to use.  We need
two to make sizes and alignments work out.
*/

if (old_size != 0)
{
/*
Shrink old_top to insert fenceposts, keeping size a
multiple of MALLOC_ALIGNMENT. We know there is at least
enough space in old_top to do this.
*/
old_size = (old_size - 2 * CHUNK_HDR_SZ) & ~MALLOC_ALIGN_MASK;
set_head (old_top, old_size | PREV_INUSE);

/*
Note that the following assignments completely overwrite
old_top when old_size was previously MINSIZE.  This is
intentional. We need the fencepost, even if old_top otherwise gets
lost.
*/
set_head (chunk_at_offset (old_top, old_size),
CHUNK_HDR_SZ | PREV_INUSE);
set_head (chunk_at_offset (old_top,
old_size + CHUNK_HDR_SZ),
CHUNK_HDR_SZ | PREV_INUSE);

/* If possible, release the rest. */
if (old_size >= MINSIZE)
{
_int_free (av, old_top, 1);
}
}
}
}
}
} /* if (av !=  &main_arena) */
```

sysmalloc finale

Finish the allocation by updating the arena information

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2921C3-L2943C12

if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
av->max_system_mem = av->system_mem;
check_malloc_state (av);

/* finally, do the allocation */
p = av->top;
size = chunksize (p);

/* check that one of the above allocation paths succeeded */
if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
remainder_size = size - nb;
remainder = chunk_at_offset (p, nb);
av->top = remainder;
set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
check_malloced_chunk (av, p, nb);
return chunk2mem (p);
}

/* catch all failure paths */
__set_errno (ENOMEM);
return 0;
```

sysmalloc_mmap

<details>

<summary><code>sysmalloc_mmap</code> code</summary>

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2392C1-L2481C2

static void *
sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
{
  long int size;

  /*
    Round up size to nearest page.  For mmapped chunks, the overhead is one
    SIZE_SZ unit larger than for normal chunks, because there is no
    following chunk whose prev_size field could be used.

    See the front_misalign handling below, for glibc there is no need for
    further alignments unless we have have high alignment.
   */
  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
  else
    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);

  /* Don't try if size wraps around 0.  */
  if ((unsigned long) (size) <= (unsigned long) (nb))
    return MAP_FAILED;

  char *mm = (char *) MMAP (0, size,
                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
                            extra_flags);
  if (mm == MAP_FAILED)
    return mm;

#ifdef MAP_HUGETLB
  if (!(extra_flags & MAP_HUGETLB))
    madvise_thp (mm, size);
#endif

  __set_vma_name (mm, size, " glibc: malloc");

  /*
    The offset to the start of the mmapped region is stored in the prev_size
    field of the chunk.  This allows us to adjust returned start address to
    meet alignment requirements here and in memalign(), and still be able to
    compute proper address argument for later munmap in free() and realloc().
   */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */

  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    {
      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
      front_misalign = 0;
    }
  else
    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;

  mchunkptr p;                    /* the allocated/returned chunk */

  if (front_misalign > 0)
    {
      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
      p = (mchunkptr) (mm + correction);
      set_prev_size (p, correction);
      set_head (p, (size - correction) | IS_MMAPPED);
    }
  else
    {
      p = (mchunkptr) mm;
      set_prev_size (p, 0);
      set_head (p, size | IS_MMAPPED);
    }

  /* update statistics */
  int new = atomic_fetch_add_relaxed (&mp_.n_mmaps, 1) + 1;
  atomic_max (&mp_.max_n_mmaps, new);

  unsigned long sum;
  sum = atomic_fetch_add_relaxed (&mp_.mmapped_mem, size) + size;
  atomic_max (&mp_.max_mmapped_mem, sum);

  check_chunk (av, p);

  return chunk2mem (p);
}
```

</details>
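
To see from user land when this mmap path is taken, here is a hedged sketch assuming the default `M_MMAP_THRESHOLD` of 128 KiB and the x86-64 chunk layout (size word at `mem - 8`, `IS_MMAPPED` = 0x2):

```c
// Compare a heap-backed allocation with one big enough to be served by mmap.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    void *small = malloc(0x100);     /* comes from the main heap / top chunk */
    void *big   = malloc(0x40000);   /* 256 KiB: typically mmap'ed directly */

    size_t small_sz = ((size_t *)small)[-1];   /* chunk size field */
    size_t big_sz   = ((size_t *)big)[-1];

    printf("small=%p size=0x%zx IS_MMAPPED=%zu\n", small, small_sz, small_sz & 2);
    printf("big  =%p size=0x%zx IS_MMAPPED=%zu\n", big,   big_sz,   big_sz & 2);
    return 0;
}
```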
