# malloc & sysmalloc


## Allocation Order Summary

(No checks are explained in this summary and some cases have been omitted for brevity)

1. `__libc_malloc` tries to get a chunk from the tcache; if it can't, it calls `_int_malloc`
2. `_int_malloc`:
   1. Tries to generate the arena if there isn't one
   2. If there is any fast bin chunk of the correct size, use it
      1. Fill the tcache with other fast chunks
   3. If there is any small bin chunk of the correct size, use it
      1. Fill the tcache with other chunks of that size
   4. If the requested size isn't for small bins, consolidate the fast bins into the unsorted bin
   5. Check the unsorted bin, use the first chunk with enough space
      1. If the found chunk is bigger, split it to return a part and add the remainder back to the unsorted bin
      2. If a chunk is of the same size as the requested size, use it to fill the tcache instead of returning it (until the tcache is full, then return the next one)
      3. For each chunk of smaller size checked, put it in its respective small or large bin
   6. Check the large bin at the index of the requested size
      1. Start looking from the first chunk that is bigger than the requested size; if any is found, return it and add the remainder to the unsorted bin
   7. Check the large bins from the next indexes until the end
      1. From the next bigger index search for any chunk, split the first found chunk to use it for the requested size and add the remainder to the unsorted bin
   8. If nothing is found in the previous bins, get a chunk from the top chunk
   9. If the top chunk wasn't big enough, enlarge it with `sysmalloc`
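To make the size classes used throughout this summary concrete, here is a standalone sketch (not glibc code) that maps a request size to the bin families `_int_malloc` will consider; the thresholds (0x410 maximum tcache chunk, 0x80 `global_max_fast`, 0x400 small/large boundary) are assumed defaults for a 64-bit glibc and may differ on other builds:

```c
#include <stdio.h>
#include <stddef.h>

/* Approximate request -> chunk-size conversion and bin classification for a
 * 64-bit glibc with default tunables (assumed values, not taken from this
 * page): tcache serves chunks up to 0x410, fastbins up to 0x80, small bins
 * below 0x400, everything else is large. */
static void classify(size_t request)
{
    size_t chunk = request + 8;              /* add the size header (SIZE_SZ) */
    if (chunk < 0x20) chunk = 0x20;          /* MINSIZE */
    chunk = (chunk + 0xf) & ~(size_t)0xf;    /* round up to 16-byte alignment */

    printf("request 0x%zx -> chunk 0x%zx:", request, chunk);
    if (chunk <= 0x410) printf(" tcache");
    if (chunk <= 0x80)  printf(" fastbin");
    if (chunk < 0x400)  printf(" smallbin");
    else                printf(" largebin (fastbins are consolidated first)");
    printf("\n");
}

int main(void)
{
    size_t sizes[] = { 0x18, 0x68, 0x88, 0x3f8, 0x418, 0x21000 };
    for (unsigned i = 0; i < sizeof sizes / sizeof *sizes; i++)
        classify(sizes[i]);
    return 0;
}
```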

## __libc_malloc

The `malloc` function actually calls `__libc_malloc`. This function checks the tcache to see if there is any available chunk of the desired size. If there is, it uses it; if not, it checks whether this is a single-threaded process, in which case it calls `_int_malloc` on the main arena, and otherwise it calls `_int_malloc` on the thread's arena.

<details>

<summary><code>__libc_malloc</code> code</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c

#if IS_IN (libc)
void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  _Static_assert (PTRDIFF_MAX <= SIZE_MAX / 2,
                  "PTRDIFF_MAX is not more than half of SIZE_MAX");

  if (!__malloc_initialized)
    ptmalloc_init ();
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes = checked_request2size (bytes);
  if (tbytes == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      && tcache != NULL
      && tcache->counts[tc_idx] > 0)
    {
      victim = tcache_get (tc_idx);
      return tag_new_usable (victim);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      victim = tag_new_usable (_int_malloc (&main_arena, bytes));
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
              &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  victim = tag_new_usable (victim);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
```

</details>

Note how it will always tag the returned pointer with `tag_new_usable`, from the code:
```c
/* Allocate a new random color and use it to color the user region of
   a chunk; this may include data from the subsequent chunk's header
   if tagging is sufficiently fine grained.  Returns PTR suitably
   recolored for accessing the memory there.  */
void *tag_new_usable (void *ptr)
```

## _int_malloc

This is the function that allocates memory using the other bins and the top chunk.

### Start

It starts by defining some variables and computing the real chunk size that the requested memory space needs to have (via `checked_request2size`).

### Fast Bin

If the needed size falls inside the fast bin sizes, try to use a chunk from the fast bin. Basically, based on the size, it will find the fast bin index where valid chunks should be located and, if there are any, it will return one of those. Moreover, if tcache is enabled, it will fill the tcache bin of that size with fast bins (a simplified sketch of this index calculation follows the checks below).

While performing these actions, some security checks are executed here:

* If the chunk is misaligned: `malloc(): unaligned fastbin chunk detected 2`
* If the forward chunk is misaligned: `malloc(): unaligned fastbin chunk detected`
* If the returned chunk has a size that isn't correct because of its index in the fast bin: `malloc(): memory corruption (fast)`
* If any chunk used to fill the tcache is misaligned: `malloc(): unaligned fastbin chunk detected 3`
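As a rough illustration of the index lookup mentioned above, this standalone sketch re-implements simplified 64-bit versions of the `fastbin_index` and `csize2tidx` macros (hand-copied approximations; the real definitions live in malloc.c and may differ between glibc versions):

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified 64-bit versions of the index macros; assumed to match the
 * glibc definitions closely, but verify against your malloc.c. */
#define MINSIZE           0x20
#define MALLOC_ALIGNMENT  0x10
#define fastbin_index(sz) ((((unsigned int) (sz)) >> 4) - 2)
#define csize2tidx(sz)    (((sz) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)

int main(void)
{
    /* Chunk sizes in the default fastbin range (0x20 .. 0x80). */
    for (size_t csize = 0x20; csize <= 0x80; csize += 0x10)
        printf("chunk 0x%zx -> fastbin idx %u, tcache idx %zu\n",
               csize, fastbin_index(csize), (size_t) csize2tidx(csize));
    return 0;
}
```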

### malloc_consolidate

If it wasn't a small chunk, it's a large chunk, and in this case `malloc_consolidate` is called to avoid memory fragmentation.

```c
/*
   If this is a large request, consolidate fastbins before continuing.
   While it might look excessive to kill all fastbins before
   even seeing if there is space available, this avoids
   fragmentation problems normally associated with fastbins.
   Also, in practice, programs tend to have runs of either small or
   large requests, but less often mixtures, so consolidation is not
   invoked all that often in most programs. And the programs that
   it is called frequently in otherwise tend to fragment.
 */

else
  {
    idx = largebin_index (nb);
    if (atomic_load_relaxed (&av->have_fastchunks))
      malloc_consolidate (av);
  }
```

The `malloc_consolidate` function basically removes chunks from the fast bins and places them into the unsorted bin. After the next malloc these chunks will be organised in their respective small/large bins.

Note that if, while removing these chunks, they are found next to previous or next chunks that aren't in use, they will be unlinked and merged before placing the final chunk in the unsorted bin.

For each fast bin chunk a couple of security checks are performed:

* If the chunk is unaligned, trigger: `malloc_consolidate(): unaligned fastbin chunk detected`
* If the chunk has a size different from the one it should have because of the index it's in: `malloc_consolidate(): invalid chunk size`
* If the previous chunk is not in use and its size differs from the one indicated by `prev_size`: `corrupted size vs. prev_size in fastbins`
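A user-space sketch (not glibc code) that drives this consolidation path, assuming default 64-bit glibc tunables (7 tcache entries per bin, `global_max_fast` 0x80, large requests starting at a 0x400 chunk size):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: exercise the malloc_consolidate path described above. Counts and
 * sizes (7 tcache slots, 0x68 fastbin-sized requests, a 0x400+ "large"
 * request) are assumptions about default tunables. */
int main(void)
{
    void *slots[10];
    for (int i = 0; i < 10; i++)
        slots[i] = malloc(0x68);        /* 0x70-byte chunks, fastbin-sized */
    void *guard = malloc(0x18);         /* keeps the run away from the top chunk */

    for (int i = 0; i < 10; i++)        /* 7 frees fill the tcache bin,        */
        free(slots[i]);                 /* the last 3 land in the 0x70 fastbin */

    /* A large request forces malloc_consolidate: the 3 fastbin chunks get
     * merged into one 0x150 free chunk and moved to the unsorted bin. */
    void *big = malloc(0x420);

    /* Too big for the 0x70 tcache/fastbin entries, so this is carved out of
     * the consolidated region; on a default build it usually comes back at
     * the address of slots[7]. */
    void *reuse = malloc(0xd0);
    printf("slots[7]=%p  reuse=%p\n", slots[7], reuse);

    free(guard); free(big); free(reuse);
    return 0;
}
```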

<details>

<summary><code>malloc_consolidate</code> function</summary>
```c
// https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L4810C1-L4905C2

static void malloc_consolidate(mstate av)
{
  mfastbinptr*    fb;             /* current fastbin being consolidated */
  mfastbinptr*    maxfb;          /* last fastbin (for loop control) */
  mchunkptr       p;              /* current chunk being consolidated */
  mchunkptr       nextp;          /* next chunk to consolidate */
  mchunkptr       unsorted_bin;   /* bin header */
  mchunkptr       first_unsorted; /* chunk to link to */

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;

  atomic_store_relaxed (&av->have_fastchunks, false);

  unsorted_bin = unsorted_chunks(av);

  /*
    Remove each chunk from fast bin and consolidate it, placing it
    then in unsorted bin. Among other reasons for doing this,
    placing in unsorted bin avoids needing to calculate actual bins
    until malloc is sure that chunks aren't immediately going to be
    reused anyway.
  */

  maxfb = &fastbin (av, NFASTBINS - 1);
  fb = &fastbin (av, 0);
  do {
    p = atomic_exchange_acquire (fb, NULL);
    if (p != 0) {
      do {
        {
          if (__glibc_unlikely (misaligned_chunk (p)))
            malloc_printerr ("malloc_consolidate(): "
                             "unaligned fastbin chunk detected");

          unsigned int idx = fastbin_index (chunksize (p));
          if ((&fastbin (av, idx)) != fb)
            malloc_printerr ("malloc_consolidate(): invalid chunk size");
        }

        check_inuse_chunk(av, p);
        nextp = REVEAL_PTR (p->fd);

        /* Slightly streamlined version of consolidation code in free() */
        size = chunksize (p);
        nextchunk = chunk_at_offset(p, size);
        nextsize = chunksize(nextchunk);

        if (!prev_inuse(p)) {
          prevsize = prev_size (p);
          size += prevsize;
          p = chunk_at_offset(p, -((long) prevsize));
          if (__glibc_unlikely (chunksize(p) != prevsize))
            malloc_printerr ("corrupted size vs. prev_size in fastbins");
          unlink_chunk (av, p);
        }

        if (nextchunk != av->top) {
          nextinuse = inuse_bit_at_offset(nextchunk, nextsize);

          if (!nextinuse) {
            size += nextsize;
            unlink_chunk (av, nextchunk);
          } else
            clear_inuse_bit_at_offset(nextchunk, 0);

          first_unsorted = unsorted_bin->fd;
          unsorted_bin->fd = p;
          first_unsorted->bk = p;

          if (!in_smallbin_range (size)) {
            p->fd_nextsize = NULL;
            p->bk_nextsize = NULL;
          }

          set_head(p, size | PREV_INUSE);
          p->bk = unsorted_bin;
          p->fd = first_unsorted;
          set_foot(p, size);
        }

        else {
          size += nextsize;
          set_head(p, size | PREV_INUSE);
          av->top = p;
        }

      } while ( (p = nextp) != 0);

    }
  } while (fb++ != maxfb);
}
```

</details>

### Unsorted bin

It's time to check the unsorted bin for a potential valid chunk to use.

#### Start

This starts with a big `for` loop that will traverse the unsorted bin in the `bk` direction until it reaches the end (the arena struct) with `while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))`.

Moreover, security checks are performed every time a new chunk is considered:

* If the chunk size is weird (too small or too big): `malloc(): invalid size (unsorted)`
* If the next chunk size is weird (too small or too big): `malloc(): invalid next size (unsorted)`
* If the previous size indicated by the next chunk differs from the size of the chunk: `malloc(): mismatching next->prev_size (unsorted)`
* If not `victim->bck->fd == victim` or not `victim->fd == av` (arena): `malloc(): unsorted double linked list corrupted`
  * As we are always checking the last one, its `fd` should always point to the arena struct.
* If the next chunk doesn't indicate that the previous one is in use: `malloc(): invalid next->prev_inuse (unsorted)`

<details>

<summary><code>_int_malloc</code> unsorted bin start</summary>
```c
/*
Process recently freed or remaindered chunks, taking one only if
it is exact fit, or, if this a small request, the chunk is remainder from
the most recent non-exact fit.  Place other traversed chunks in
bins.  Note that this step is the only place in any routine where
chunks are placed in bins.

The outer loop here is needed because we might not realize until
near the end of malloc that we should have consolidated, so must
do so and retry. This happens at most once, and only when we would
otherwise need to expand memory to service a "small" request.
*/

#if USE_TCACHE
INTERNAL_SIZE_T tcache_nb = 0;
size_t tc_idx = csize2tidx (nb);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
tcache_nb = nb;
int return_cached = 0;

tcache_unsorted_count = 0;
#endif

for (;; )
{
int iters = 0;
while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
{
bck = victim->bk;
size = chunksize (victim);
mchunkptr next = chunk_at_offset (victim, size);

if (__glibc_unlikely (size <= CHUNK_HDR_SZ)
|| __glibc_unlikely (size > av->system_mem))
malloc_printerr ("malloc(): invalid size (unsorted)");
if (__glibc_unlikely (chunksize_nomask (next) < CHUNK_HDR_SZ)
|| __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
malloc_printerr ("malloc(): invalid next size (unsorted)");
if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
malloc_printerr ("malloc(): mismatching next->prev_size (unsorted)");
if (__glibc_unlikely (bck->fd != victim)
|| __glibc_unlikely (victim->fd != unsorted_chunks (av)))
malloc_printerr ("malloc(): unsorted double linked list corrupted");
if (__glibc_unlikely (prev_inuse (next)))
malloc_printerr ("malloc(): invalid next->prev_inuse (unsorted)");

```

</details>

#### if `in_smallbin_range`

If the chunk is bigger than the requested size, use it, put the rest of the chunk's space into the unsorted list and update `last_remainder` with it.

<details>

<summary><code>_int_malloc</code> unsorted bin <code>in_smallbin_range</code></summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4090C11-L4124C14

/* If a small request, try to use last remainder if it is the
   only chunk in unsorted bin.  This helps promote locality for
   runs of consecutive small requests. This is the only
   exception to best-fit, and applies only when there is
   no exact fit for a small chunk.  */

if (in_smallbin_range (nb) &&
    bck == unsorted_chunks (av) &&
    victim == av->last_remainder &&
    (unsigned long) (size) > (unsigned long) (nb + MINSIZE))
  {
    /* split and reattach remainder */
    remainder_size = size - nb;
    remainder = chunk_at_offset (victim, nb);
    unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
    av->last_remainder = remainder;
    remainder->bk = remainder->fd = unsorted_chunks (av);
    if (!in_smallbin_range (remainder_size))
      {
        remainder->fd_nextsize = NULL;
        remainder->bk_nextsize = NULL;
      }

    set_head (victim, nb | PREV_INUSE |
              (av != &main_arena ? NON_MAIN_ARENA : 0));
    set_head (remainder, remainder_size | PREV_INUSE);
    set_foot (remainder, remainder_size);

    check_malloced_chunk (av, victim, nb);
    void *p = chunk2mem (victim);
    alloc_perturb (p, bytes);
    return p;
  }
```

</details>

If this was successful, return the chunk and it's over; if not, continue executing the function...
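A small user-space sketch (not glibc code) of the `last_remainder` locality described above, assuming a default 64-bit glibc (sizes chosen so the freed chunk skips the tcache and fastbins):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: a freed 0x510 chunk lands in the unsorted bin; the first small
 * request splits it (setting av->last_remainder) and later small requests
 * reuse that remainder, so the pointers come back adjacent. Sizes are
 * assumptions tied to default tunables. */
int main(void)
{
    void *big   = malloc(0x500);
    void *guard = malloc(0x18);   /* stops 'big' from merging into the top chunk */
    free(big);                    /* 0x510 chunk goes to the unsorted bin */

    void *a = malloc(0x48);       /* usually returns big's address */
    void *b = malloc(0x48);       /* usually a + 0x50 */
    void *c = malloc(0x48);       /* usually b + 0x50 */
    printf("big=%p a=%p b=%p c=%p\n", big, a, b, c);

    free(a); free(b); free(c); free(guard);
    return 0;
}
```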

#### if equal size

Continue with removing the chunk from the bin, in case the requested size is exactly the same as the chunk's size:

* If the tcache is not filled, add it to the tcache and continue indicating there is a tcache chunk that could be used
* If the tcache is full, just use it, returning it

<details>

<summary><code>_int_malloc</code> unsorted bin equal size</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4126C11-L4157C14

/* remove from unsorted list */
unsorted_chunks (av)->bk = bck;
bck->fd = unsorted_chunks (av);

/* Take now instead of binning if exact fit */

if (size == nb)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
#if USE_TCACHE
/* Fill cache first, return to user only if cache fills.
We may return one of these chunks later.  */
if (tcache_nb > 0
&& tcache->counts[tc_idx] < mp_.tcache_count)
{
tcache_put (victim, tc_idx);
return_cached = 1;
continue;
}
else
{
#endif
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
#if USE_TCACHE
}
#endif
}

```

</details>

If the chunk wasn't returned or added to the tcache, continue with the code...

#### Place chunk in a bin

Store the checked chunk in the small bin or in the large bin according to the size of the chunk (keeping the large bin properly organised).

Security checks are performed to make sure both large bin doubly linked lists aren't corrupted:

* If `fwd->bk_nextsize->fd_nextsize != fwd`: `malloc(): largebin double linked list corrupted (nextsize)`
* If `fwd->bk->fd != fwd`: `malloc(): largebin double linked list corrupted (bk)`
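A minimal sketch (not glibc structures or code) of what these two integrity checks verify before linking `victim` into the list:

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy doubly linked list mimicking the relevant chunk fields. */
struct fake_chunk {
    struct fake_chunk *fd, *bk;
    struct fake_chunk *fd_nextsize, *bk_nextsize;
};

/* Before inserting 'victim' next to 'fwd', verify fwd's neighbours still
 * point back at fwd, mirroring the two checks quoted above. */
static void insert_before(struct fake_chunk *fwd, struct fake_chunk *victim)
{
    if (fwd->bk_nextsize->fd_nextsize != fwd) {
        fprintf(stderr, "malloc(): largebin double linked list corrupted (nextsize)\n");
        exit(1);
    }
    struct fake_chunk *bck = fwd->bk;
    if (bck->fd != fwd) {
        fprintf(stderr, "malloc(): largebin double linked list corrupted (bk)\n");
        exit(1);
    }
    victim->fd = fwd;  victim->bk = bck;   /* normal insertion between bck and fwd */
    fwd->bk = victim;  bck->fd = victim;
}

int main(void)
{
    struct fake_chunk bin = { &bin, &bin, &bin, &bin };
    struct fake_chunk victim = {0};
    victim.fd_nextsize = victim.bk_nextsize = &victim;

    bin.bk_nextsize = &victim;       /* simulate a corrupted nextsize pointer */
    insert_before(&bin, &victim);    /* -> aborts with the (nextsize) message */
    return 0;
}
```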

<details>

<summary><code>_int_malloc</code> place chunk in a bin</summary>
```c
/* place chunk in bin */

if (in_smallbin_range (size))
  {
    victim_index = smallbin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;
  }
else
  {
    victim_index = largebin_index (size);
    bck = bin_at (av, victim_index);
    fwd = bck->fd;

    /* maintain large bins in sorted order */
    if (fwd != bck)
      {
        /* Or with inuse bit to speed comparisons */
        size |= PREV_INUSE;
        /* if smaller than smallest, bypass loop below */
        assert (chunk_main_arena (bck->bk));
        if ((unsigned long) (size)
            < (unsigned long) chunksize_nomask (bck->bk))
          {
            fwd = bck;
            bck = bck->bk;

            victim->fd_nextsize = fwd->fd;
            victim->bk_nextsize = fwd->fd->bk_nextsize;
            fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
          }
        else
          {
            assert (chunk_main_arena (fwd));
            while ((unsigned long) size < chunksize_nomask (fwd))
              {
                fwd = fwd->fd_nextsize;
                assert (chunk_main_arena (fwd));
              }

            if ((unsigned long) size
                == (unsigned long) chunksize_nomask (fwd))
              /* Always insert in the second position.  */
              fwd = fwd->fd;
            else
              {
                victim->fd_nextsize = fwd;
                victim->bk_nextsize = fwd->bk_nextsize;
                if (__glibc_unlikely (fwd->bk_nextsize->fd_nextsize != fwd))
                  malloc_printerr ("malloc(): largebin double linked list corrupted (nextsize)");
                fwd->bk_nextsize = victim;
                victim->bk_nextsize->fd_nextsize = victim;
              }
            bck = fwd->bk;
            if (bck->fd != fwd)
              malloc_printerr ("malloc(): largebin double linked list corrupted (bk)");
          }
      }
    else
      victim->fd_nextsize = victim->bk_nextsize = victim;
  }

mark_bin (av, victim_index);
victim->bk = bck;
victim->fd = fwd;
fwd->bk = victim;
bck->fd = victim;
```

</details>

#### `_int_malloc` limits

At this point, if some chunk was stored in the tcache that can be used and the limit is reached, just **return a tcache chunk**.

Moreover, if **MAX\_ITERS** is reached, break from the loop and get a chunk in a different way (from the top chunk).

If `return_cached` was set, just return a chunk from the tcache to avoid larger searches.

<details>

<summary><code>_int_malloc</code> limits</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4227C1-L4250C7

#if USE_TCACHE
/* If we've processed as many chunks as we're allowed while
filling the cache, return one of the cached ones.  */
++tcache_unsorted_count;
if (return_cached
&& mp_.tcache_unsorted_limit > 0
&& tcache_unsorted_count > mp_.tcache_unsorted_limit)
{
return tcache_get (tc_idx);
}
#endif

#define MAX_ITERS       10000
if (++iters >= MAX_ITERS)
break;
}

#if USE_TCACHE
/* If all the small chunks we found ended up cached, return one now.  */
if (return_cached)
{
return tcache_get (tc_idx);
}
#endif

```

</details>

If limits aren't reached, continue with the code...

### Large Bin (by index)

If the request is large (not in the small bin range) and we haven't returned any chunk yet, get the index of the requested size in the large bin, check that it isn't empty and that the biggest chunk in that bin is bigger than the requested size, and in that case find the smallest chunk that can be used for the requested size.

If the remaining space from the finally used chunk is big enough to be a new chunk, it's added back to the unsorted bin.

A security check is performed when adding the remainder to the unsorted bin:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks`

<details>

<summary><code>_int_malloc</code> Large bin (by index)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4252C7-L4317C10

/*
   If a large request, scan through the chunks of current bin in
   sorted order to find smallest that fits.  Use the skip list for this.
 */

if (!in_smallbin_range (nb))
  {
    bin = bin_at (av, idx);

    /* skip scan if empty or largest chunk is too small */
    if ((victim = first (bin)) != bin
        && (unsigned long) chunksize_nomask (victim)
           >= (unsigned long) (nb))
      {
        victim = victim->bk_nextsize;
        while (((unsigned long) (size = chunksize (victim)) <
                (unsigned long) (nb)))
          victim = victim->bk_nextsize;

        /* Avoid removing the first entry for a size so that the skip
           list does not have to be rerouted.  */
        if (victim != last (bin)
            && chunksize_nomask (victim) == chunksize_nomask (victim->fd))
          victim = victim->fd;

        remainder_size = size - nb;
        unlink_chunk (av, victim);

        /* Exhaust */
        if (remainder_size < MINSIZE)
          {
            set_inuse_bit_at_offset (victim, size);
            if (av != &main_arena)
              set_non_main_arena (victim);
          }
        /* Split */
        else
          {
            remainder = chunk_at_offset (victim, nb);
            /* We cannot assume the unsorted list is empty and therefore
               have to perform a complete insert here.  */
            bck = unsorted_chunks (av);
            fwd = bck->fd;
            if (__glibc_unlikely (fwd->bk != bck))
              malloc_printerr ("malloc(): corrupted unsorted chunks");
            remainder->bk = bck;
            remainder->fd = fwd;
            bck->fd = remainder;
            fwd->bk = remainder;
            if (!in_smallbin_range (remainder_size))
              {
                remainder->fd_nextsize = NULL;
                remainder->bk_nextsize = NULL;
              }
            set_head (victim, nb | PREV_INUSE |
                      (av != &main_arena ? NON_MAIN_ARENA : 0));
            set_head (remainder, remainder_size | PREV_INUSE);
            set_foot (remainder, remainder_size);
          }
        check_malloced_chunk (av, victim, nb);
        void *p = chunk2mem (victim);
        alloc_perturb (p, bytes);
        return p;
      }
  }
```

</details>

If a chunk suitable for this purpose isn't found, continue.

### Large Bin (next bigger)

If in the exact large bin there wasn't any chunk that could be used, start looping through all the next large bins (starting from the immediately bigger one) until one is found (if any).

The remainder of the split chunk is added to the unsorted bin, `last_remainder` is updated (for small requests), and the same security check is performed:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks 2`
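The following user-space sketch (not glibc code) exercises both large-bin paths described above; the sizes are assumptions chosen to skip the tcache (chunks above 0x410) on a default 64-bit glibc:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: two large freed chunks get sorted into their large bins, then one
 * request hits its exact bin and another falls through to the next bigger
 * bin via the binmap walk. */
int main(void)
{
    void *a  = malloc(0x4f8);   /* 0x500 chunk */
    void *g1 = malloc(0x18);    /* guards keep freed chunks off the top chunk */
    void *b  = malloc(0x8f8);   /* 0x900 chunk */
    void *g2 = malloc(0x18);

    free(a);
    free(b);                    /* both land in the unsorted bin */

    /* Too big for either freed chunk: while scanning, _int_malloc sorts a
     * and b into their large bins and serves this one from the top chunk. */
    void *big = malloc(0x2000);

    /* Exact large-bin index: the 0x500 chunk fits, so 'a' comes back. */
    void *again_a = malloc(0x4f8);

    /* Nothing in this request's own bin, so the binmap walk finds the next
     * bigger bin and splits the 0x900 chunk: 'b' comes back. */
    void *from_b = malloc(0x5f8);

    printf("a=%p again_a=%p\nb=%p from_b=%p\n", a, again_a, b, from_b);
    (void)g1; (void)g2; (void)big;
    return 0;
}
```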

<details>

<summary><code>_int_malloc</code> Large Bin (next bigger)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4319C7-L4425C10

/*
Search for a chunk by scanning bins, starting with next largest
bin. This search is strictly by best-fit; i.e., the smallest
(with ties going to approximately the least recently used) chunk
that fits is selected.

The bitmap avoids needing to check that most blocks are nonempty.
The particular case of skipping all bins during warm-up phases
when no chunks have been returned yet is faster than it might look.
*/

++idx;
bin = bin_at (av, idx);
block = idx2block (idx);
map = av->binmap[block];
bit = idx2bit (idx);

for (;; )
{
/* Skip rest of block if there are no more set bits in this block.  */
if (bit > map || bit == 0)
{
do
{
if (++block >= BINMAPSIZE) /* out of bins */
goto use_top;
}
while ((map = av->binmap[block]) == 0);

bin = bin_at (av, (block << BINMAPSHIFT));
bit = 1;
}

/* Advance to bin with set bit. There must be one. */
while ((bit & map) == 0)
{
bin = next_bin (bin);
bit <<= 1;
assert (bit != 0);
}

/* Inspect the bin. It is likely to be non-empty */
victim = last (bin);

/*  If a false alarm (empty bin), clear the bit. */
if (victim == bin)
{
av->binmap[block] = map &= ~bit; /* Write through */
bin = next_bin (bin);
bit <<= 1;
}

else
{
size = chunksize (victim);

/*  We know the first chunk in this bin is big enough to use. */
assert ((unsigned long) (size) >= (unsigned long) (nb));

remainder_size = size - nb;

/* unlink */
unlink_chunk (av, victim);

/* Exhaust */
if (remainder_size < MINSIZE)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
}

/* Split */
else
{
remainder = chunk_at_offset (victim, nb);

/* We cannot assume the unsorted list is empty and therefore
have to perform a complete insert here.  */
bck = unsorted_chunks (av);
fwd = bck->fd;
if (__glibc_unlikely (fwd->bk != bck))
malloc_printerr ("malloc(): corrupted unsorted chunks 2");
remainder->bk = bck;
remainder->fd = fwd;
bck->fd = remainder;
fwd->bk = remainder;

/* advertise as last remainder */
if (in_smallbin_range (nb))
av->last_remainder = remainder;
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}
set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);
}
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}

```

</details>

### Top Chunk

At this point, it's time to get a new chunk from the top chunk (if it's big enough).

It starts with a security check making sure that the size of the top chunk isn't too big (corrupted):

* `chunksize(av->top) > av->system_mem`: `malloc(): corrupted top size`

Then, it will use the top chunk space if it's big enough to create a chunk of the requested size. If not, if there are fast chunks, consolidate them and try again. Finally, if there isn't enough space, use `sysmalloc` to allocate enough size.
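A trivial user-space sketch (not glibc code) of the top-chunk carving described above, assuming a fresh 64-bit glibc heap where all bins are still empty:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: with empty bins, consecutive requests are split off the top chunk
 * one after another, so the pointers usually grow by the chunk size
 * (0x110 for a 0x108 request on 64-bit glibc, an assumed default). */
int main(void)
{
    void *p1 = malloc(0x108);
    void *p2 = malloc(0x108);
    void *p3 = malloc(0x108);
    printf("p1=%p p2=%p p3=%p (stride 0x%lx)\n",
           p1, p2, p3, (unsigned long)((char *)p2 - (char *)p1));
    free(p1); free(p2); free(p3);
    return 0;
}
```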

<details>

<summary><code>_int_malloc</code> Top chunk</summary>
```c
use_top:
  /*
     If large enough, split off the chunk bordering the end of memory
     (held in av->top). Note that this is in accord with the best-fit
     search rule.  In effect, av->top is treated as larger (and thus
     less well fitting) than any other available chunk since it can be
     extended to be as large as necessary (up to system limitations).

     We require that av->top always exists (i.e., has size >= MINSIZE)
     after initialization, so if it would otherwise be exhausted by
     current request, it is replenished. (The main reason for ensuring
     it exists is that we may need MINSIZE space to put in fenceposts
     in sysmalloc.)
   */

  victim = av->top;
  size = chunksize (victim);

  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");

  if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
    {
      remainder_size = size - nb;
      remainder = chunk_at_offset (victim, nb);
      av->top = remainder;
      set_head (victim, nb | PREV_INUSE |
                (av != &main_arena ? NON_MAIN_ARENA : 0));
      set_head (remainder, remainder_size | PREV_INUSE);

      check_malloced_chunk (av, victim, nb);
      void *p = chunk2mem (victim);
      alloc_perturb (p, bytes);
      return p;
    }

  /* When we are using atomic ops to free fast chunks we can get
     here for all block sizes.  */
  else if (atomic_load_relaxed (&av->have_fastchunks))
    {
      malloc_consolidate (av);
      /* restore original bin index */
      if (in_smallbin_range (nb))
        idx = smallbin_index (nb);
      else
        idx = largebin_index (nb);
    }

  /* Otherwise, relay to handle system-dependent cases */
  else
    {
      void *p = sysmalloc (nb, av);
      if (p != NULL)
        alloc_perturb (p, bytes);
      return p;
    }
  }
}
```

</details>

## sysmalloc

### sysmalloc start

If the arena is null or the requested size is too big (and there are permitted mmaps left), use `sysmalloc_mmap` to allocate the space and return it.
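Before the glibc code below, a quick user-space observation (a sketch, not glibc code) of the mmap path it describes; the 128 KiB `mmap_threshold` and the `IS_MMAPPED` bit value (0x2) are assumed defaults for 64-bit glibc:

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: a request above the mmap threshold is served by sysmalloc_mmap,
 * and the chunk's size header gets the IS_MMAPPED bit set. */
int main(void)
{
    void *small = malloc(0x100);        /* served from the heap (brk)  */
    void *huge  = malloc(0x40000);      /* 256 KiB -> usually mmap'ed  */

    size_t small_hdr = ((size_t *)small)[-1];   /* chunk size field */
    size_t huge_hdr  = ((size_t *)huge)[-1];

    printf("small %p header 0x%zx (IS_MMAPPED=%zu)\n",
           small, small_hdr, (small_hdr >> 1) & 1);
    printf("huge  %p header 0x%zx (IS_MMAPPED=%zu)\n",
           huge, huge_hdr, (huge_hdr >> 1) & 1);

    free(small);
    free(huge);                         /* munmap'ed straight back */
    return 0;
}
```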
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2531

/*
sysmalloc handles malloc cases requiring more memory from the system.
On entry, it is assumed that av->top does not have enough
space to service request for nb bytes, thus requiring that av->top
be extended or replaced.
*/

static void *
sysmalloc (INTERNAL_SIZE_T nb, mstate av)
{
mchunkptr old_top;              /* incoming value of av->top */
INTERNAL_SIZE_T old_size;       /* its size */
char *old_end;                  /* its end address */

long size;                      /* arg to first MORECORE or mmap call */
char *brk;                      /* return value from MORECORE */

long correction;                /* arg to 2nd MORECORE call */
char *snd_brk;                  /* 2nd return val */

INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
char *aligned_brk;              /* aligned offset into brk */

mchunkptr p;                    /* the allocated/returned chunk */
mchunkptr remainder;            /* remainder from allocation */
unsigned long remainder_size;   /* its size */


size_t pagesize = GLRO (dl_pagesize);
bool tried_mmap = false;


/*
If have mmap, and the request size meets the mmap threshold, and
the system supports mmap, and there are few enough currently
allocated mmapped regions, try to directly map this request
rather than expanding top.
*/

if (av == NULL
|| ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
&& (mp_.n_mmaps < mp_.n_mmaps_max)))
{
char *mm;
if (mp_.hp_pagesize > 0 && nb >= mp_.hp_pagesize)
{
/* There is no need to issue the THP madvise call if Huge Pages are
used directly.  */
mm = sysmalloc_mmap (nb, mp_.hp_pagesize, mp_.hp_flags, av);
if (mm != MAP_FAILED)
return mm;
}
mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
tried_mmap = true;
}

/* There are no usable arenas and mmap also failed.  */
if (av == NULL)
return 0;

```

### sysmalloc checks

It starts by getting old top chunk information and checking that some of the following conditions are true:

* The old heap size is 0 (new heap)
* The size of the previous heap is greater than MINSIZE and the old top is in use
* The heap is aligned to the page size (0x1000, so the lower 12 bits need to be 0)

Then it also checks that:

* The old size doesn't have enough space to create a chunk for the requested size

<details>

<summary>sysmalloc checks</summary>
```c
/* Record incoming configuration of top */

old_top = av->top;
old_size = chunksize (old_top);
old_end = (char *) (chunk_at_offset (old_top, old_size));

brk = snd_brk = (char *) (MORECORE_FAILURE);

/*
   If not the first time through, we require old_size to be
   at least MINSIZE and to have prev_inuse set.
 */

assert ((old_top == initial_top (av) && old_size == 0) ||
        ((unsigned long) (old_size) >= MINSIZE &&
         prev_inuse (old_top) &&
         ((unsigned long) old_end & (pagesize - 1)) == 0));

/* Precondition: not enough current space to satisfy nb request */
assert ((unsigned long) (old_size) < (unsigned long) (nb + MINSIZE));
```

</details>

### sysmalloc not main arena

It will first try to **extend** the previous heap of this arena. If that isn't possible, it tries to allocate a new heap and update the pointers to be able to use it.\
Finally, if that doesn't work, it tries calling **`sysmalloc_mmap`**.
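A user-space sketch (not glibc code) of why this matters: secondary (thread) arenas are backed by `new_heap`/`grow_heap` mmap'ed regions rather than the brk heap, so their allocations sit at very different addresses (compile with `-pthread`; the 64 MiB `HEAP_MAX_SIZE` figure is an assumed default):

```c
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

/* Sketch: compare a main-arena allocation with one made from a thread's
 * secondary arena. */
static void *worker(void *arg)
{
    (void)arg;
    void *p = malloc(0x100);    /* served from this thread's arena (mmap'ed heap) */
    printf("thread alloc: %p\n", p);
    free(p);
    return NULL;
}

int main(void)
{
    void *p = malloc(0x100);    /* served from the main arena (brk heap) */
    printf("main alloc:   %p\n", p);

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    free(p);
    return 0;
}
```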

<details>

<summary>sysmalloc not main arena</summary>
```c
if (av != &main_arena)
{
heap_info *old_heap, *heap;
size_t old_heap_size;

/* First try to extend the current heap. */
old_heap = heap_for_ptr (old_top);
old_heap_size = old_heap->size;
if ((long) (MINSIZE + nb - old_size) > 0
&& grow_heap (old_heap, MINSIZE + nb - old_size) == 0)
{
av->system_mem += old_heap->size - old_heap_size;
set_head (old_top, (((char *) old_heap + old_heap->size) - (char *) old_top)
| PREV_INUSE);
}
else if ((heap = new_heap (nb + (MINSIZE + sizeof (*heap)), mp_.top_pad)))
{
/* Use a newly allocated heap.  */
heap->ar_ptr = av;
heap->prev = old_heap;
av->system_mem += heap->size;
/* Set up the new top.  */
top (av) = chunk_at_offset (heap, sizeof (*heap));
set_head (top (av), (heap->size - sizeof (*heap)) | PREV_INUSE);

/* Setup fencepost and free the old top chunk with a multiple of
MALLOC_ALIGNMENT in size. */
/* The fencepost takes at least MINSIZE bytes, because it might
become the top chunk again later.  Note that a footer is set
up, too, although the chunk is marked in use. */
old_size = (old_size - MINSIZE) & ~MALLOC_ALIGN_MASK;
set_head (chunk_at_offset (old_top, old_size + CHUNK_HDR_SZ),
0 | PREV_INUSE);
if (old_size >= MINSIZE)
{
set_head (chunk_at_offset (old_top, old_size),
CHUNK_HDR_SZ | PREV_INUSE);
set_foot (chunk_at_offset (old_top, old_size), CHUNK_HDR_SZ);
set_head (old_top, old_size | PREV_INUSE | NON_MAIN_ARENA);
_int_free (av, old_top, 1);
}
else
{
set_head (old_top, (old_size + CHUNK_HDR_SZ) | PREV_INUSE);
set_foot (old_top, (old_size + CHUNK_HDR_SZ));
}
}
else if (!tried_mmap)
{
/* We can at least try to use to mmap memory.  If new_heap fails
it is unlikely that trying to allocate huge pages will
succeed.  */
char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
}
}

```

</details>

### sysmalloc main arena

It starts by calculating the amount of memory needed. It will begin by requesting contiguous memory, so in this case it will be possible to reuse the old memory that isn't used. Some align operations are also performed.
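A small user-space sketch (not glibc code) of the effect described above, assuming default tunables (0x20000 mmap threshold, 0x20000 `top_pad`); the exact numbers will vary per build:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: requests below the mmap threshold that no longer fit in the top
 * chunk make sysmalloc extend the main arena with MORECORE/sbrk, which is
 * visible as a moving program break. */
int main(void)
{
    void *warmup = malloc(0x10);      /* force heap initialisation */
    void *before = sbrk(0);

    void *a = malloc(0x1f000);        /* usually still fits in the top chunk */
    void *b = malloc(0x1f000);        /* usually exhausts it -> sbrk grows   */
    void *after = sbrk(0);

    printf("brk before=%p after=%p (grew 0x%lx bytes)\n",
           before, after, (unsigned long)((char *)after - (char *)before));

    free(warmup); free(a); free(b);
    return 0;
}
```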

<details>

<summary>sysmalloc main arena</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2665C1-L2713C10

else     /* av == main_arena */
  {      /* Request enough space for nb + pad + overhead */
    size = nb + mp_.top_pad + MINSIZE;

    /*
       If contiguous, we can subtract out existing space that we hope to
       combine with new space. We add it back later only if
       we don't actually get contiguous space.
     */

    if (contiguous (av))
      size -= old_size;

    /*
       Round to a multiple of page size or huge page size.
       If MORECORE is not contiguous, this ensures that we only call it
       with whole-page arguments.  And if MORECORE is contiguous and
       this is not first time through, this preserves page-alignment of
       previous calls. Otherwise, we correct to page-align below.
     */

#ifdef MADV_HUGEPAGE
    /* Defined in brk.c.  */
    extern void *__curbrk;
    if (__glibc_unlikely (mp_.thp_pagesize != 0))
      {
        uintptr_t top = ALIGN_UP ((uintptr_t) __curbrk + size,
                                  mp_.thp_pagesize);
        size = top - (uintptr_t) __curbrk;
      }
    else
#endif
      size = ALIGN_UP (size, GLRO(dl_pagesize));

    /*
       Don't try to call MORECORE if argument is so big as to appear
       negative. Note that since mmap takes size_t arg, it may succeed
       below even if we cannot call MORECORE.
     */

    if (size > 0)
      {
        brk = (char *) (MORECORE (size));
        if (brk != (char *) (MORECORE_FAILURE))
          madvise_thp (brk, size);
        LIBC_PROBE (memory_sbrk_more, 2, brk, size);
      }
```

</details>

### sysmalloc main arena previous error 1

If the previous call returned `MORECORE_FAILURE`, try again to allocate memory using `sysmalloc_mmap_fallback`.
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2715C7-L2740C10

if (brk == (char *) (MORECORE_FAILURE))
{
/*
If have mmap, try using it as a backup when MORECORE fails or
cannot be used. This is worth doing on systems that have "holes" in
address space, so sbrk cannot extend to give contiguous space, but
space is available elsewhere.  Note that we ignore mmap max count
and threshold limits, since the space will not be used as a
segregated mmap region.
*/

char *mbrk = MAP_FAILED;
if (mp_.hp_pagesize > 0)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
mp_.hp_pagesize, mp_.hp_pagesize,
mp_.hp_flags, av);
if (mbrk == MAP_FAILED)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, MMAP_AS_MORECORE_SIZE,
pagesize, 0, av);
if (mbrk != MAP_FAILED)
{
/* We do not need, and cannot use, another sbrk call to find end */
brk = mbrk;
snd_brk = brk + size;
}
}

```

### sysmalloc main arena continue

If the previous call didn't return `MORECORE_FAILURE`, i.e. it worked, create some alignments:

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2742

if (brk != (char *) (MORECORE_FAILURE))
{
if (mp_.sbrk_base == 0)
mp_.sbrk_base = brk;
av->system_mem += size;

/*
If MORECORE extends previous space, we can likewise extend top size.
*/

if (brk == old_end && snd_brk == (char *) (MORECORE_FAILURE))
set_head (old_top, (size + old_size) | PREV_INUSE);

else if (contiguous (av) && old_size && brk < old_end)
/* Oops!  Someone else killed our space..  Can't touch anything.  */
malloc_printerr ("break adjusted to free malloc space");

/*
Otherwise, make adjustments:

* If the first time through or noncontiguous, we need to call sbrk
just to find out where the end of memory lies.

* We need to ensure that all returned chunks from malloc will meet
MALLOC_ALIGNMENT

* If there was an intervening foreign sbrk, we need to adjust sbrk
request size to account for fact that we will not be able to
combine new space with existing space in old_top.

* Almost all systems internally allocate whole pages at a time, in
which case we might as well use the whole last page of request.
So we allocate enough more memory to hit a page boundary now,
which in turn causes future contiguous calls to page-align.
*/

else
{
front_misalign = 0;
end_misalign = 0;
correction = 0;
aligned_brk = brk;

/* handle contiguous cases */
if (contiguous (av))
{
/* Count foreign sbrk as system_mem.  */
if (old_size)
av->system_mem += brk - old_end;

/* Guarantee alignment of first new chunk made from this space */

front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
if (front_misalign > 0)
{
/*
Skip over some bytes to arrive at an aligned position.
We don't need to specially mark these wasted front bytes.
They will never be accessed anyway because
prev_inuse of av->top (and any chunk created from its start)
is always true after initialization.
*/

correction = MALLOC_ALIGNMENT - front_misalign;
aligned_brk += correction;
}

/*
If this isn't adjacent to existing space, then we will not
be able to merge with old_top space, so must add to 2nd request.
*/

correction += old_size;

/* Extend the end address to hit a page boundary */
end_misalign = (INTERNAL_SIZE_T) (brk + size + correction);
correction += (ALIGN_UP (end_misalign, pagesize)) - end_misalign;

assert (correction >= 0);
snd_brk = (char *) (MORECORE (correction));

/*
If can't allocate correction, try to at least find out current
brk.  It might be enough to proceed without failing.

Note that if second sbrk did NOT fail, we assume that space
is contiguous with first sbrk. This is a safe assumption unless
program is multithreaded but doesn't use locks and a foreign sbrk
occurred between our first and second calls.
*/

if (snd_brk == (char *) (MORECORE_FAILURE))
{
correction = 0;
snd_brk = (char *) (MORECORE (0));
}
else
madvise_thp (snd_brk, correction);
}

/* handle non-contiguous cases */
else
{
if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
/* MORECORE/mmap must correctly align */
assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
else
{
front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
if (front_misalign > 0)
{
/*
Skip over some bytes to arrive at an aligned position.
We don't need to specially mark these wasted front bytes.
They will never be accessed anyway because
prev_inuse of av->top (and any chunk created from its start)
is always true after initialization.
*/

aligned_brk += MALLOC_ALIGNMENT - front_misalign;
}
}

/* Find out current end of memory */
if (snd_brk == (char *) (MORECORE_FAILURE))
{
snd_brk = (char *) (MORECORE (0));
}
}

/* Adjust top based on results of second sbrk */
if (snd_brk != (char *) (MORECORE_FAILURE))
{
av->top = (mchunkptr) aligned_brk;
set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
av->system_mem += correction;

/*
If not the first time through, we either have a
gap due to foreign sbrk or a non-contiguous region.  Insert a
double fencepost at old_top to prevent consolidation with space
we don't own. These fenceposts are artificial chunks that are
marked as inuse and are in any case too small to use.  We need
two to make sizes and alignments work out.
*/

if (old_size != 0)
{
/*
Shrink old_top to insert fenceposts, keeping size a
multiple of MALLOC_ALIGNMENT. We know there is at least
enough space in old_top to do this.
*/
old_size = (old_size - 2 * CHUNK_HDR_SZ) & ~MALLOC_ALIGN_MASK;
set_head (old_top, old_size | PREV_INUSE);

/*
Note that the following assignments completely overwrite
old_top when old_size was previously MINSIZE.  This is
intentional. We need the fencepost, even if old_top otherwise gets
lost.
*/
set_head (chunk_at_offset (old_top, old_size),
CHUNK_HDR_SZ | PREV_INUSE);
set_head (chunk_at_offset (old_top,
old_size + CHUNK_HDR_SZ),
CHUNK_HDR_SZ | PREV_INUSE);

/* If possible, release the rest. */
if (old_size >= MINSIZE)
{
_int_free (av, old_top, 1);
}
}
}
}
}
} /* if (av !=  &main_arena) */

```

### sysmalloc finale

Finish the allocation by updating the arena information

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2921C3-L2943C12

if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
av->max_system_mem = av->system_mem;
check_malloc_state (av);

/* finally, do the allocation */
p = av->top;
size = chunksize (p);

/* check that one of the above allocation paths succeeded */
if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
{
remainder_size = size - nb;
remainder = chunk_at_offset (p, nb);
av->top = remainder;
set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
check_malloced_chunk (av, p, nb);
return chunk2mem (p);
}

/* catch all failure paths */
__set_errno (ENOMEM);
return 0;

```

## sysmalloc_mmap

<details>

<summary><code>sysmalloc_mmap</code> code</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2392C1-L2481C2

static void *
sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
{
  long int size;

  /*
    Round up size to nearest page.  For mmapped chunks, the overhead is one
    SIZE_SZ unit larger than for normal chunks, because there is no
    following chunk whose prev_size field could be used.

    See the front_misalign handling below, for glibc there is no need for
    further alignments unless we have have high alignment.
   */
  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
  else
    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);

  /* Don't try if size wraps around 0.  */
  if ((unsigned long) (size) <= (unsigned long) (nb))
    return MAP_FAILED;

  char *mm = (char *) MMAP (0, size,
                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
                            extra_flags);
  if (mm == MAP_FAILED)
    return mm;

#ifdef MAP_HUGETLB
  if (!(extra_flags & MAP_HUGETLB))
    madvise_thp (mm, size);
#endif

  __set_vma_name (mm, size, " glibc: malloc");

  /*
    The offset to the start of the mmapped region is stored in the prev_size
    field of the chunk.  This allows us to adjust returned start address to
    meet alignment requirements here and in memalign(), and still be able to
    compute proper address argument for later munmap in free() and realloc().
   */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */

  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    {
      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
      front_misalign = 0;
    }
  else
    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;

  mchunkptr p;                    /* the allocated/returned chunk */

  if (front_misalign > 0)
    {
      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
      p = (mchunkptr) (mm + correction);
      set_prev_size (p, correction);
      set_head (p, (size - correction) | IS_MMAPPED);
    }
  else
    {
      p = (mchunkptr) mm;
      set_prev_size (p, 0);
      set_head (p, size | IS_MMAPPED);
    }

  /* update statistics */
  int new = atomic_fetch_add_relaxed (&mp_.n_mmaps, 1) + 1;
  atomic_max (&mp_.max_n_mmaps, new);

  unsigned long sum;
  sum = atomic_fetch_add_relaxed (&mp_.mmapped_mem, size) + size;
  atomic_max (&mp_.max_mmapped_mem, sum);

  check_chunk (av, p);

  return chunk2mem (p);
}
```

</details>
