(No security checks are explained in this summary and some cases have been omitted for brevity)

* `__libc_malloc` tries to get a chunk from the tcache; if there isn't one, it calls `_int_malloc`
* `_int_malloc`:
  * Tries to generate the arena if there isn't any
  * If there is any fast bin chunk of the correct size, use it
    * Fill the tcache with other fast chunks
  * If there is any small bin chunk of the correct size, use it
    * Fill the tcache with other chunks of that size
  * If the requested size isn't for small bins, consolidate the fast bins into the unsorted bin
  * Check the unsorted bin, use the first chunk with enough space
    * If the found chunk is bigger, divide it to return a part and add the remainder back to the unsorted bin
    * If a chunk is of the same size as the requested size, use it to fill the tcache instead of returning it (until the tcache is full, then return the next one)
    * For each chunk of smaller size checked, put it in its respective small or large bin
  * Check the large bin at the index of the requested size
    * Start looking from the first chunk that is bigger than the requested size; if any is found, return it and add the remainder to the unsorted bin
  * Check the large bins from the next indexes until the end
    * From the next bigger index check for any chunk, divide the first found chunk to use it for the requested size and add the remainder to the unsorted bin
  * If nothing is found in the previous bins, get a chunk from the top chunk
  * If the top chunk wasn't big enough, enlarge it with `sysmalloc`
## __libc_malloc

The `malloc` function actually calls `__libc_malloc`. This function checks the tcache to see if there is any available chunk of the desired size. If there is, it uses it; if not, it checks whether this is a single-threaded process and, in that case, calls `_int_malloc` on the main arena; otherwise it calls `_int_malloc` on the thread's arena.
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c
#if IS_IN (libc)
void *
__libc_malloc (size_t bytes)
{
  mstate ar_ptr;
  void *victim;

  _Static_assert (PTRDIFF_MAX <= SIZE_MAX / 2,
                  "PTRDIFF_MAX is not more than half of SIZE_MAX");

  if (!__malloc_initialized)
    ptmalloc_init ();
#if USE_TCACHE
  /* int_free also calls request2size, be careful to not pad twice.  */
  size_t tbytes = checked_request2size (bytes);
  if (tbytes == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
  size_t tc_idx = csize2tidx (tbytes);

  MAYBE_INIT_TCACHE ();

  DIAG_PUSH_NEEDS_COMMENT;
  if (tc_idx < mp_.tcache_bins
      && tcache != NULL
      && tcache->counts[tc_idx] > 0)
    {
      victim = tcache_get (tc_idx);
      return tag_new_usable (victim);
    }
  DIAG_POP_NEEDS_COMMENT;
#endif

  if (SINGLE_THREAD_P)
    {
      victim = tag_new_usable (_int_malloc (&main_arena, bytes));
      assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
              &main_arena == arena_for_chunk (mem2chunk (victim)));
      return victim;
    }

  arena_get (ar_ptr, bytes);

  victim = _int_malloc (ar_ptr, bytes);
  /* Retry with another arena only if we were able to find a usable arena
     before.  */
  if (!victim && ar_ptr != NULL)
    {
      LIBC_PROBE (memory_malloc_retry, 1, bytes);
      ar_ptr = arena_get_retry (ar_ptr, bytes);
      victim = _int_malloc (ar_ptr, bytes);
    }

  if (ar_ptr != NULL)
    __libc_lock_unlock (ar_ptr->mutex);

  victim = tag_new_usable (victim);

  assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
          ar_ptr == arena_for_chunk (mem2chunk (victim)));
  return victim;
}
```
Note how it will always tag the returned pointer with `tag_new_usable`, from the code:
```c
void *
tag_new_usable (void *ptr)

  Allocate a new random color and use it to color the user region of
  a chunk; this may include data from the subsequent chunk's header
  if tagging is sufficiently fine grained.  Returns PTR suitably
  recolored for accessing the memory there.
```
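To see the tcache fast path from user space, here is a minimal sketch. It assumes a glibc build with tcache enabled and default tunables; pointer reuse is typical but not guaranteed, and comparing a freed pointer is done here purely for demonstration:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Allocate and free a chunk small enough to be cached in the tcache. */
    void *a = malloc(0x48);
    free(a);                     /* usually lands in the tcache bin for this size */

    /* The next request of the same size is typically served by tcache_get()
       inside __libc_malloc, before _int_malloc is ever reached. */
    void *b = malloc(0x48);

    printf("a=%p b=%p %s\n", a, b,
           a == b ? "(reused from tcache)" : "(not reused)");
    return 0;
}
```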
## _int_malloc

This is the function that allocates memory using the other bins and the top chunk.

### Start

It starts by defining some variables and getting the real size that the requested memory space needs to have:

<details>
<summary>_int_malloc start</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3847
static void *
_int_malloc (mstate av, size_t bytes)
{
  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int idx;                 /* associated bin index */
  mbinptr bin;                      /* associated bin */

  mchunkptr victim;                 /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int victim_index;                 /* its bin index */

  mchunkptr remainder;              /* remainder from a split */
  unsigned long remainder_size;     /* its size */

  unsigned int block;               /* bit map traverser */
  unsigned int bit;                 /* bit map traverser */
  unsigned int map;                 /* current word of binmap */

  mchunkptr fwd;                    /* misc temp for linking */
  mchunkptr bck;                    /* misc temp for linking */

  /*
     Convert request size to internal form by adding SIZE_SZ bytes
     overhead plus possibly more to obtain necessary alignment and/or
     to obtain a size of at least MINSIZE, the smallest allocatable
     size. Also, checked_request2size returns false for request sizes
     that are so large that they wrap around zero when padded and
     aligned.
   */

  nb = checked_request2size (bytes);
  if (nb == 0)
    {
      __set_errno (ENOMEM);
      return NULL;
    }
```
</details>
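As an illustration of that normalization, here is a hypothetical `approx_request2size` helper, a simplified sketch assuming 64-bit glibc defaults (SIZE_SZ = 8, MALLOC_ALIGNMENT = 16, MINSIZE = 32); the real logic lives in the `request2size`/`checked_request2size` macros in malloc.c:

```c
#include <stdio.h>
#include <stddef.h>

#define SIZE_SZ           8UL   /* assumed: sizeof(size_t) on 64-bit */
#define MALLOC_ALIGNMENT  16UL  /* assumed default alignment */
#define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
#define MINSIZE           32UL  /* smallest allocatable chunk */

/* Simplified version of the rounding done to obtain `nb`: add room for the
   size field and round up to the alignment, with a MINSIZE floor. */
static size_t approx_request2size(size_t req)
{
    size_t nb = (req + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK;
    return nb < MINSIZE ? MINSIZE : nb;
}

int main(void)
{
    size_t reqs[] = { 1, 24, 25, 0x48, 0x88 };
    for (size_t i = 0; i < sizeof reqs / sizeof *reqs; i++)
        printf("request %#zx -> chunk size %#zx\n",
               reqs[i], approx_request2size(reqs[i]));
    return 0;
}
```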
### Arena

In the unlikely event that there are no usable arenas, it falls back to `sysmalloc` to get a chunk from `mmap`:
<details>
<summary>_int_malloc not arena</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3885C3-L3893C6
/* There are no usable arenas. Fall back to sysmalloc to get a chunk from
mmap. */
if (__glibc_unlikely (av == NULL))
{
void *p = sysmalloc (nb, av);
if (p != NULL)
alloc_perturb (p, bytes);
return p;
}
```

</details>

### Fast Bin
If the requested size is within the fast bin sizes, try to use a chunk from the fast bin. Basically, based on the size, it will find the fast bin index where valid chunks should be located, and if there are any, it will return one of them. Moreover, if tcache is enabled, it will fill the tcache bin of that size with fast bin chunks.

While performing these actions, some security checks are executed here:

* If the chunk is misaligned: `malloc(): unaligned fastbin chunk detected 2`
* If the forward chunk is misaligned: `malloc(): unaligned fastbin chunk detected`
* If the returned chunk has a size that isn't correct because of its index in the fast bin: `malloc(): memory corruption (fast)`
* If any chunk used to fill the tcache is misaligned: `malloc(): unaligned fastbin chunk detected 3`
<details>
<summary>_int_malloc fast bin</summary>

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3895C3-L3967C6
  /*
     If the size qualifies as a fastbin, first check corresponding bin.
     This code is safe to execute even if av is not yet initialized, so we
     can try it without checking, which saves some time on this fast path.
   */

      if (victim != NULL)
        {
          if (__glibc_unlikely (misaligned_chunk (victim)))
            malloc_printerr ("malloc(): unaligned fastbin chunk detected 2");

          if (SINGLE_THREAD_P)
            *fb = REVEAL_PTR (victim->fd);
          else
            REMOVE_FB (fb, pp, victim);
          if (__glibc_likely (victim != NULL))
            {
              size_t victim_idx = fastbin_index (chunksize (victim));
              if (__builtin_expect (victim_idx != idx, 0))
                malloc_printerr ("malloc(): memory corruption (fast)");
              check_remalloced_chunk (av, victim, nb);
#if USE_TCACHE
              /* While we're here, if we see other chunks of the same size,
                 stash them in the tcache.  */
              size_t tc_idx = csize2tidx (nb);
              if (tcache != NULL && tc_idx < mp_.tcache_bins)
                {
                  mchunkptr tc_victim;

                  /* While bin not empty and tcache not full, copy chunks.  */
                  while (tcache->counts[tc_idx] < mp_.tcache_count
                         && (tc_victim = *fb) != NULL)
                    {
                      if (__glibc_unlikely (misaligned_chunk (tc_victim)))
                        malloc_printerr ("malloc(): unaligned fastbin chunk detected 3");
                      if (SINGLE_THREAD_P)
                        *fb = REVEAL_PTR (tc_victim->fd);
                      else
                        {
                          REMOVE_FB (fb, pp, tc_victim);
                          if (__glibc_unlikely (tc_victim == NULL))
                            break;
                        }
                      tcache_put (tc_victim, tc_idx);
                    }
                }
#endif
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
    }
```
</details>
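A sketch that tries to reach this path from user space: with the default `tcache_count` of 7, the first 7 frees of a size land in the tcache and further frees of a fastbin-sized chunk go to the fast bin; the next allocation of that size that misses the tcache can then be served from the fast bin, which also stashes the remaining fast chunks back into the tcache. This is glibc-version dependent and not guaranteed:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 9               /* 7 for the tcache + 2 extra for the fast bin */

int main(void)
{
    void *p[N];
    for (int i = 0; i < N; i++)
        p[i] = malloc(0x20);      /* fastbin-sized request */

    /* Keep the freed chunks from consolidating into the top chunk. */
    void *barrier = malloc(0x20);

    for (int i = 0; i < N; i++)
        free(p[i]);               /* typically: 7 -> tcache, the rest -> fast bin */

    /* Drain the tcache (LIFO); the next malloc then falls through to
       _int_malloc's fast bin path, which also refills the tcache. */
    for (int i = 0; i < N; i++)
        printf("malloc(0x20) -> %p\n", malloc(0x20));

    free(barrier);
    return 0;
}
```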
### Small Bin

As indicated in a comment, small bins hold one size per index, so checking whether a valid chunk is available is very fast; therefore, after the fast bins, the small bins are checked.

The first check is to find out whether the requested size could be inside a small bin. In that case, get the corresponding **index** inside the small bin and see if there is any **available chunk**.

Then, a security check is performed checking:

* whether `victim->bk->fd == victim`. To verify that both chunks are correctly linked.

In that case, the chunk **gets the `inuse` bit set,** the doubly linked list is fixed so this chunk disappears from it (as it's going to be used), and the non-main-arena bit is set if needed.

Finally, **fill the tcache index of the requested size** with other chunks inside the small bin (if any), as sketched below.
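To make the `victim->bk->fd == victim` integrity check concrete, here is a toy model with a hypothetical `fake_chunk` type. It mimics the field names but is not the real glibc chunk layout; it only illustrates what the check validates:

```c
#include <stdio.h>
#include <stddef.h>

/* Toy model of a circular doubly linked bin; `fd`/`bk` mimic glibc naming
   but this is NOT the real chunk layout. */
struct fake_chunk { struct fake_chunk *fd, *bk; };

int main(void)
{
    struct fake_chunk bin, a, b;

    /* bin <-> a <-> b <-> bin (circular list) */
    bin.fd = &a;  a.bk = &bin;
    a.fd  = &b;   b.bk = &a;
    b.fd  = &bin; bin.bk = &b;

    struct fake_chunk *victim = bin.bk;   /* last(bin), i.e. b */
    printf("link check: %s\n",
           victim->bk->fd == victim
               ? "ok"
               : "malloc(): smallbin double linked list corrupted");

    /* Corrupt the list the way an attacker overwriting b->bk might. */
    b.bk = (struct fake_chunk *)0x41414141;
    /* Performing the same check now would dereference the bogus pointer;
       the real allocator aborts with the message above before unlinking
       such a chunk. */
    return 0;
}
```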
<details>
<summary>_int_malloc small bin</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L3895C3-L3967C6
/*
If a small request, check regular bin. Since these "smallbins"
hold one size each, no searching within bins is necessary.
(For a large request, we need to wait until unsorted chunks are
processed to find best fit. But for small ones, fits are exact
anyway, so we can check now, which is faster.)
*/
if (in_smallbin_range (nb))
{
idx = smallbin_index (nb);
bin = bin_at (av, idx);
if ((victim = last (bin)) != bin)
{
bck = victim->bk;
if (__glibc_unlikely (bck->fd != victim))
malloc_printerr ("malloc(): smallbin double linked list corrupted");
set_inuse_bit_at_offset (victim, nb);
bin->bk = bck;
bck->fd = bin;
if (av != &main_arena)
set_non_main_arena (victim);
check_malloced_chunk (av, victim, nb);
#if USE_TCACHE
/* While we're here, if we see other chunks of the same size,
stash them in the tcache. */
size_t tc_idx = csize2tidx (nb);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
{
mchunkptr tc_victim;
/* While bin not empty and tcache not full, copy chunks over. */
while (tcache->counts[tc_idx] < mp_.tcache_count
&& (tc_victim = last (bin)) != bin)
{
if (tc_victim != 0)
{
bck = tc_victim->bk;
set_inuse_bit_at_offset (tc_victim, nb);
if (av != &main_arena)
set_non_main_arena (tc_victim);
bin->bk = bck;
bck->fd = bin;
tcache_put (tc_victim, tc_idx);
}
}
}
#endif
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}
```

</details>
### malloc_consolidate

If it wasn't a small chunk, it's a large chunk, and in this case `malloc_consolidate` is called to avoid memory fragmentation.
```c
/*
   If this is a large request, consolidate fastbins before continuing.
   While it might look excessive to kill all fastbins before
   even seeing if there is space available, this avoids
   fragmentation problems normally associated with fastbins.
   Also, in practice, programs tend to have runs of either small or
   large requests, but less often mixtures, so consolidation is not
   invoked all that often in most programs. And the programs that
   it is called frequently in otherwise tend to fragment.
 */

  else
    {
      idx = largebin_index (nb);
      if (atomic_load_relaxed (&av->have_fastchunks))
        malloc_consolidate (av);
    }
```
The malloc consolidate function basically removes chunks from the fast bins and places them into the unsorted bin. After the next malloc these chunks will be organized into their respective small/large bins.

Note that while removing these chunks, if they are found adjacent to previous or next chunks that aren't in use, those will be unlinked and merged before placing the final chunk in the unsorted bin.

For each fast bin chunk a couple of security checks are performed:

* If the chunk is unaligned, trigger: `malloc_consolidate(): unaligned fastbin chunk detected`
* If the chunk has a size different from the one it should have because of the index it's in: `malloc_consolidate(): invalid chunk size`
* If the previous chunk isn't in use and the previous chunk has a size different from the one indicated by `prev_size`: `corrupted size vs. prev_size in fastbins`
<details>
<summary><code>malloc_consolidate</code></summary>

```c
static void malloc_consolidate(mstate av)
{
  mfastbinptr*    fb;                 /* current fastbin being consolidated */
  mfastbinptr*    maxfb;              /* last fastbin (for loop control) */
  mchunkptr       p;                  /* current chunk being consolidated */
  mchunkptr       nextp;              /* next chunk to consolidate */
  mchunkptr       unsorted_bin;       /* bin header */
  mchunkptr       first_unsorted;     /* chunk to link to */

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;

  /*
    Remove each chunk from fast bin and consolidate it, placing it
    then in unsorted bin. Among other reasons for doing this,
    placing in unsorted bin avoids needing to calculate actual bins
    until malloc is sure that chunks aren't immediately going to be
    reused anyway.
  */

  maxfb = &fastbin (av, NFASTBINS - 1);
  fb = &fastbin (av, 0);
  do {
    p = atomic_exchange_acquire (fb, NULL);
    if (p != 0) {
      do {
        {
          if (__glibc_unlikely (misaligned_chunk (p)))
            malloc_printerr ("malloc_consolidate(): "
                             "unaligned fastbin chunk detected");

          unsigned int idx = fastbin_index (chunksize (p));
          if ((&fastbin (av, idx)) != fb)
            malloc_printerr ("malloc_consolidate(): invalid chunk size");
        }
```
</details>
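Here is a sketch of how a large request can trigger consolidation. It assumes default tcache behaviour and a fresh heap; the exact results (adjacency, pointer reuse) depend on the glibc version and prior heap state:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Fill the tcache bin for this size first so later frees reach the fast bin. */
    void *t[7], *f[2];
    for (int i = 0; i < 7; i++) t[i] = malloc(0x20);
    f[0] = malloc(0x20);
    f[1] = malloc(0x20);          /* f[0] and f[1] are typically adjacent chunks */
    void *barrier = malloc(0x20); /* keep them away from the top chunk */

    for (int i = 0; i < 7; i++) free(t[i]);
    free(f[0]);
    free(f[1]);                   /* both now usually sit in the same fast bin */

    /* A request outside the small bin range calls malloc_consolidate():
       the two adjacent fast chunks are merged and moved to the unsorted
       bin before the large allocation is serviced. */
    void *big = malloc(0x500);

    /* The merged chunk can now be handed back as a single allocation;
       `merged` often equals f[0] because the coalesced chunk starts there. */
    void *merged = malloc(0x50);
    printf("f[0]=%p merged=%p big=%p\n", f[0], merged, big);

    free(big); free(merged); free(barrier);
    return 0;
}
```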
### Unsorted bin

It's time to check the unsorted bin for a potential valid chunk to use.

#### Start

This starts with a big for loop that will be traversing the unsorted bin in the `bk` direction until it arrives at the end (the arena struct) with `while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))`

Moreover, some security checks are performed every time a new chunk is considered:

* If the chunk size is weird (too small or too big): `malloc(): invalid size (unsorted)`
* If the next chunk size is weird (too small or too big): `malloc(): invalid next size (unsorted)`
* If the previous size indicated by the next chunk differs from the size of the chunk: `malloc(): mismatching next->prev_size (unsorted)`
* If not `victim->bk->fd == victim` or not `victim->fd == unsorted_chunks (av)` (the arena): `malloc(): unsorted double linked list corrupted`
  * As we're always checking the last one, its `fd` should always be pointing to the arena struct.
* If the next chunk doesn't indicate that the previous one is in use: `malloc(): invalid next->prev_inuse (unsorted)`
<details>
<summary><code>_int_malloc</code> unsorted bin start</summary>
```c
/*
Process recently freed or remaindered chunks, taking one only if
it is exact fit, or, if this a small request, the chunk is remainder from
the most recent non-exact fit. Place other traversed chunks in
bins. Note that this step is the only place in any routine where
chunks are placed in bins.
The outer loop here is needed because we might not realize until
near the end of malloc that we should have consolidated, so must
do so and retry. This happens at most once, and only when we would
otherwise need to expand memory to service a "small" request.
*/
#if USE_TCACHE
INTERNAL_SIZE_T tcache_nb = 0;
size_t tc_idx = csize2tidx (nb);
if (tcache != NULL && tc_idx < mp_.tcache_bins)
tcache_nb = nb;
int return_cached = 0;
tcache_unsorted_count = 0;
#endif
for (;; )
{
int iters = 0;
while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av))
{
bck = victim->bk;
size = chunksize (victim);
mchunkptr next = chunk_at_offset (victim, size);
if (__glibc_unlikely (size <= CHUNK_HDR_SZ)
|| __glibc_unlikely (size > av->system_mem))
malloc_printerr ("malloc(): invalid size (unsorted)");
if (__glibc_unlikely (chunksize_nomask (next) < CHUNK_HDR_SZ)
|| __glibc_unlikely (chunksize_nomask (next) > av->system_mem))
malloc_printerr ("malloc(): invalid next size (unsorted)");
if (__glibc_unlikely ((prev_size (next) & ~(SIZE_BITS)) != size))
malloc_printerr ("malloc(): mismatching next->prev_size (unsorted)");
if (__glibc_unlikely (bck->fd != victim)
|| __glibc_unlikely (victim->fd != unsorted_chunks (av)))
malloc_printerr ("malloc(): unsorted double linked list corrupted");
if (__glibc_unlikely (prev_inuse (next)))
malloc_printerr ("malloc(): invalid next->prev_inuse (unsorted)");
```

</details>

#### if `in_smallbin_range`

If the chunk is bigger than the requested size, use it, place the remaining part of the chunk back in the unsorted list and update `last_remainder` with it.

<details>
<summary>_int_malloc unsorted bin <code>in_smallbin_range</code></summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4090C11-L4124C14

          /*
             If a small request, try to use last remainder if it is the
             only chunk in unsorted bin.  This helps promote locality for
             runs of consecutive small requests. This is the only
             exception to best-fit, and applies only when there is
             no exact fit for a small chunk.
           */
```
</details>
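A user-space sketch of the remainder splitting described above, with sizes chosen so the freed chunk bypasses the tcache; details vary by glibc version:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 0x500 is above the tcache range, so freeing it puts the chunk
       straight into the unsorted bin. */
    void *big = malloc(0x500);
    void *barrier = malloc(0x20);   /* avoid consolidation with the top chunk */
    free(big);

    /* Small requests are now carved out of that freed chunk; the
       remainder stays in the unsorted bin as last_remainder, so the
       two allocations are typically adjacent in memory. */
    void *a = malloc(0x30);
    void *b = malloc(0x30);

    printf("big=%p a=%p b=%p (b - a = %#tx)\n",
           big, a, b, (char *)b - (char *)a);

    free(a); free(b); free(barrier);
    return 0;
}
```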
If this was successful, return the chunk and that's it; otherwise, continue executing the function...

#### if the size is equal

Continue removing the chunk from the bin, and if the requested size is exactly the same as the chunk's:

* If the tcache is not filled, add it to the tcache and keep a note indicating that there is a tcache chunk that could be used
* If the tcache is full, just use it and return it
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4126C11-L4157C14
/* remove from unsorted list */
unsorted_chunks (av)->bk = bck;
bck->fd = unsorted_chunks (av);
/* Take now instead of binning if exact fit */
if (size == nb)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
#if USE_TCACHE
/* Fill cache first, return to user only if cache fills.
We may return one of these chunks later. */
if (tcache_nb > 0
&& tcache->counts[tc_idx] < mp_.tcache_count)
{
tcache_put (victim, tc_idx);
return_cached = 1;
continue;
}
else
{
#endif
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
#if USE_TCACHE
}
#endif
}
```

If the chunk isn't returned or added to the tcache, continue with the code...

#### Place chunk in a bin
Store the checked chunk in the small bin or in the large bin according to its size (keeping the large bin properly organized).

There are security checks being performed to make sure the large bin doubly linked list isn't corrupted:

* If `fwd->bk_nextsize->fd_nextsize != fwd`: `malloc(): largebin double linked list corrupted (nextsize)`
* If `fwd->bk->fd != fwd`: `malloc(): largebin double linked list corrupted (bk)`
```c
          /* maintain large bins in sorted order */
          if (fwd != bck)
            {
              /* Or with inuse bit to speed comparisons */
              size |= PREV_INUSE;
              /* if smaller than smallest, bypass loop below */
              assert (chunk_main_arena (bck->bk));
              if ((unsigned long) (size)
                  < (unsigned long) chunksize_nomask (bck->bk))
                {
                  fwd = bck;
                  bck = bck->bk;
```
#### `_int_malloc` limits

At this point, if some chunk was stored in the tcache that can be used and the limit is reached, **return a tcache chunk**.

Moreover, if **MAX\_ITERS** is reached, break from the loop and get a chunk in a different way (the top chunk).

If `return_cached` was set, just return a chunk from the tcache to avoid larger searches.
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4227C1-L4250C7
#if USE_TCACHE
/* If we've processed as many chunks as we're allowed while
filling the cache, return one of the cached ones. */
++tcache_unsorted_count;
if (return_cached
&& mp_.tcache_unsorted_limit > 0
&& tcache_unsorted_count > mp_.tcache_unsorted_limit)
{
return tcache_get (tc_idx);
}
#endif
#define MAX_ITERS 10000
if (++iters >= MAX_ITERS)
break;
}
#if USE_TCACHE
/* If all the small chunks we found ended up cached, return one now. */
if (return_cached)
{
return tcache_get (tc_idx);
}
#endif
```

If the limits aren't reached, continue with the code...

### Large Bin (by index)

If the request is large (not in the small bin range) and we haven't yet returned any chunk, get the **index** of the requested size in the **large bin**, check if it's **not empty** or if the **biggest chunk in this bin is bigger** than the requested size, and in that case find the **smallest chunk that can be used** for the requested size.

If the remaining space from the finally used chunk can be a new chunk, add it to the unsorted bin and `last_remainder` is updated.

A security check is performed when adding the remainder to the unsorted bin:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks`
<details>
<summary><code>_int_malloc</code> Large Bin (by index)</summary>

```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4252C7-L4317C10
      /*
         If a large request, scan through the chunks of current bin in
         sorted order to find smallest that fits.  Use the skip list for this.
       */

      if (!in_smallbin_range (nb))
        {
          bin = bin_at (av, idx);

          /* skip scan if empty or largest chunk is too small */
          if ((victim = first (bin)) != bin
              && (unsigned long) chunksize_nomask (victim)

              /* Avoid removing the first entry for a size so that the skip
                 list does not have to be rerouted.  */
              if (victim != last (bin)
                  && chunksize_nomask (victim) == chunksize_nomask (victim->fd))
                victim = victim->fd;

              /* Exhaust */
              if (remainder_size < MINSIZE)
                {
                  set_inuse_bit_at_offset (victim, size);
                  if (av != &main_arena)
                    set_non_main_arena (victim);
                }
              /* Split */
              else
                {
                  remainder = chunk_at_offset (victim, nb);
                  /* We cannot assume the unsorted list is empty and therefore
                     have to perform a complete insert here.  */
                  bck = unsorted_chunks (av);
                  fwd = bck->fd;
                  if (__glibc_unlikely (fwd->bk != bck))
                    malloc_printerr ("malloc(): corrupted unsorted chunks");
                  remainder->bk = bck;
                  remainder->fd = fwd;
                  bck->fd = remainder;
                  fwd->bk = remainder;
                  if (!in_smallbin_range (remainder_size))
                    {
                      remainder->fd_nextsize = NULL;
                      remainder->bk_nextsize = NULL;
                    }
                  set_head (victim, nb | PREV_INUSE |
                            (av != &main_arena ? NON_MAIN_ARENA : 0));
                  set_head (remainder, remainder_size | PREV_INUSE);
                  set_foot (remainder, remainder_size);
                }
              check_malloced_chunk (av, victim, nb);
              void *p = chunk2mem (victim);
              alloc_perturb (p, bytes);
              return p;
            }
        }
```
</details>
If no suitable chunk is found for this, continue.
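A sketch of best-fit selection from a large bin; the sizes are chosen so the freed chunks bypass the tcache, and the exact behaviour depends on the glibc version and prior heap state:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Two large chunks of different sizes, separated by barriers so they
       do not consolidate with each other or with the top chunk. */
    void *small_large = malloc(0x420);
    void *b1 = malloc(0x20);
    void *big_large = malloc(0x500);
    void *b2 = malloc(0x20);

    free(small_large);
    free(big_large);            /* both typically land in the unsorted bin */

    /* An allocation that fits in neither forces both chunks to be sorted
       into their large bins. */
    void *huge = malloc(0x600);

    /* Best fit: a 0x400-byte request should be served from the smaller
       freed chunk, not from the bigger one, so `fit` usually equals
       `small_large`. */
    void *fit = malloc(0x400);
    printf("small_large=%p fit=%p\n", small_large, fit);

    free(fit); free(huge); free(b1); free(b2);
    return 0;
}
```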
### Large Bin (next bigger)

If in the exact large bin there wasn't any chunk that could be used, start looping through all the next large bins (starting from the immediately bigger one) until one is found (if any).

The remainder of the split chunk is added to the unsorted bin, `last_remainder` is updated and the same security check is performed:

* `bck->fd->bk != bck`: `malloc(): corrupted unsorted chunks 2`
<details>
<summary><code>_int_malloc</code> Large Bin (next bigger)</summary>
```c
// From https://github.com/bminor/glibc/blob/master/malloc/malloc.c#L4319C7-L4425C10
/*
Search for a chunk by scanning bins, starting with next largest
bin. This search is strictly by best-fit; i.e., the smallest
(with ties going to approximately the least recently used) chunk
that fits is selected.
The bitmap avoids needing to check that most blocks are nonempty.
The particular case of skipping all bins during warm-up phases
when no chunks have been returned yet is faster than it might look.
*/
++idx;
bin = bin_at (av, idx);
block = idx2block (idx);
map = av->binmap[block];
bit = idx2bit (idx);
for (;; )
{
/* Skip rest of block if there are no more set bits in this block. */
if (bit > map || bit == 0)
{
do
{
if (++block >= BINMAPSIZE) /* out of bins */
goto use_top;
}
while ((map = av->binmap[block]) == 0);
bin = bin_at (av, (block << BINMAPSHIFT));
bit = 1;
}
/* Advance to bin with set bit. There must be one. */
while ((bit & map) == 0)
{
bin = next_bin (bin);
bit <<= 1;
assert (bit != 0);
}
/* Inspect the bin. It is likely to be non-empty */
victim = last (bin);
/* If a false alarm (empty bin), clear the bit. */
if (victim == bin)
{
av->binmap[block] = map &= ~bit; /* Write through */
bin = next_bin (bin);
bit <<= 1;
}
else
{
size = chunksize (victim);
/* We know the first chunk in this bin is big enough to use. */
assert ((unsigned long) (size) >= (unsigned long) (nb));
remainder_size = size - nb;
/* unlink */
unlink_chunk (av, victim);
/* Exhaust */
if (remainder_size < MINSIZE)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
}
/* Split */
else
{
remainder = chunk_at_offset (victim, nb);
/* We cannot assume the unsorted list is empty and therefore
have to perform a complete insert here. */
bck = unsorted_chunks (av);
fwd = bck->fd;
if (__glibc_unlikely (fwd->bk != bck))
malloc_printerr ("malloc(): corrupted unsorted chunks 2");
remainder->bk = bck;
remainder->fd = fwd;
bck->fd = remainder;
fwd->bk = remainder;
/* advertise as last remainder */
if (in_smallbin_range (nb))
av->last_remainder = remainder;
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}
set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);
}
check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}
```

</details>

### Top Chunk

At this point, it's time to get a new chunk from the Top chunk (if it's big enough).

It starts with a security check making sure that the size of the chunk isn't too big (corrupted):

* `chunksize(av->top) > av->system_mem`: `malloc(): corrupted top size`

Then, it will use the top chunk space if it's big enough to create a chunk of the requested size.

If not, if there are fast chunks, consolidate them and try again.

Finally, if there isn't enough space, use `sysmalloc` to allocate enough size.
<details>
<summary>_int_malloc Top Chunk</summary>

```c
use_top:
  /*
     If large enough, split off the chunk bordering the end of memory
     (held in av->top). Note that this is in accord with the best-fit
     search rule.  In effect, av->top is treated as larger (and thus
     less well fitting) than any other available chunk since it can
     be extended to be as large as necessary (up to system limitations).

     We require that av->top always exists (i.e., has size >= MINSIZE)
     after initialization, so if it would otherwise be exhausted by
     current request, it is replenished. (The main reason for ensuring
     it exists is that we may need MINSIZE space to put in fenceposts
     in sysmalloc.)
   */

  victim = av->top;
  size = chunksize (victim);

  if (__glibc_unlikely (size > av->system_mem))
    malloc_printerr ("malloc(): corrupted top size");

  /* When we are using atomic ops to free fast chunks we can get
     here for all block sizes.  */
  else if (atomic_load_relaxed (&av->have_fastchunks))
    {
      malloc_consolidate (av);
      /* restore original bin index */
      if (in_smallbin_range (nb))
        idx = smallbin_index (nb);
      else
        idx = largebin_index (nb);
    }
```
</details>
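When every bin misses, allocations are carved off the top chunk, so consecutive fresh allocations usually come back at increasing, contiguous addresses. A sketch (the 0x10-byte header overhead assumes the usual 64-bit glibc chunk layout; exact spacing may vary):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Use a size that was not freed earlier so each request falls through
       all bins down to the top chunk. */
    void *a = malloc(0x100);
    void *b = malloc(0x100);
    void *c = malloc(0x100);

    /* Typically b = a + 0x110 and c = b + 0x110: user size plus the chunk
       header overhead, carved sequentially from the top chunk. */
    printf("a=%p\nb=%p (b-a=%#tx)\nc=%p (c-b=%#tx)\n",
           a, b, (char *)b - (char *)a, c, (char *)c - (char *)b);

    free(a); free(b); free(c);
    return 0;
}
```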
## sysmalloc
### sysmalloc start

If the arena is null or the requested size is too big (and there are allowed mmaps left), use `sysmalloc_mmap` to allocate space and return it.
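A sketch showing a very large request being served via mmap: the returned chunk then carries the `IS_MMAPPED` bit (bit 1) in its size field and lives far from the normal heap. It assumes the default ~128 KiB `mmap_threshold` and the usual glibc chunk layout, and peeks at allocator internals, so it is only a demonstration. The glibc entry code follows below.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void)
{
    void *heap_chunk = malloc(0x100);       /* usually served from the heap/top */
    void *big = malloc(0x200000);           /* 2 MiB: above the default mmap_threshold */

    /* In glibc's layout the chunk size field sits right before the user
       pointer; bit 1 is IS_MMAPPED.  Non-portable, demonstration only. */
    size_t big_size = ((size_t *)big)[-1];

    printf("heap_chunk=%p big=%p\n", heap_chunk, big);
    printf("big chunk size field = %#zx, IS_MMAPPED=%zu\n",
           big_size, (big_size >> 1) & 1);

    free(big);
    free(heap_chunk);
    return 0;
}
```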
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2531
/*
sysmalloc handles malloc cases requiring more memory from the system.
On entry, it is assumed that av->top does not have enough
space to service request for nb bytes, thus requiring that av->top
be extended or replaced.
*/
static void *
sysmalloc (INTERNAL_SIZE_T nb, mstate av)
{
mchunkptr old_top; /* incoming value of av->top */
INTERNAL_SIZE_T old_size; /* its size */
char *old_end; /* its end address */
long size; /* arg to first MORECORE or mmap call */
char *brk; /* return value from MORECORE */
long correction; /* arg to 2nd MORECORE call */
char *snd_brk; /* 2nd return val */
INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
INTERNAL_SIZE_T end_misalign; /* partial page left at end of new space */
char *aligned_brk; /* aligned offset into brk */
mchunkptr p; /* the allocated/returned chunk */
mchunkptr remainder; /* remainder from allocation */
unsigned long remainder_size; /* its size */
size_t pagesize = GLRO (dl_pagesize);
bool tried_mmap = false;
/*
If have mmap, and the request size meets the mmap threshold, and
the system supports mmap, and there are few enough currently
allocated mmapped regions, try to directly map this request
rather than expanding top.
*/
if (av == NULL
|| ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
&& (mp_.n_mmaps < mp_.n_mmaps_max)))
{
char *mm;
if (mp_.hp_pagesize > 0 && nb >= mp_.hp_pagesize)
{
/* There is no need to issue the THP madvise call if Huge Pages are
used directly. */
mm = sysmalloc_mmap (nb, mp_.hp_pagesize, mp_.hp_flags, av);
if (mm != MAP_FAILED)
return mm;
}
mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
tried_mmap = true;
}
/* There are no usable arenas and mmap also failed. */
if (av == NULL)
return 0;
```

### sysmalloc checks

It starts by getting old top chunk information and checking that some of the following conditions are true:

* The old heap size is 0 (new heap)
* The previous heap size is greater than MINSIZE and the old top is in use
* The heap is aligned to the page size (0x1000, so the lower 12 bits need to be 0)

Then it also checks that:

* The old size isn't big enough to create a chunk for the requested size
<details>
<summary>sysmalloc checks</summary>

```c
  /* Precondition: not enough current space to satisfy nb request */
  assert ((unsigned long) (old_size) < (unsigned long) (nb + MINSIZE));
```
</details>
### sysmalloc not main arena

First it will try to **extend** the previous heap for this heap. If that's not possible, it will try to **allocate a new heap** and update the pointers to be able to use it. Finally, if that doesn't work, it tries calling **`sysmalloc_mmap`**.
<details>
<summary>sysmalloc not main arena</summary>
```c
if (av != &main_arena)
{
heap_info *old_heap, *heap;
size_t old_heap_size;
/* First try to extend the current heap. */
old_heap = heap_for_ptr (old_top);
old_heap_size = old_heap->size;
if ((long) (MINSIZE + nb - old_size) > 0
&& grow_heap (old_heap, MINSIZE + nb - old_size) == 0)
{
av->system_mem += old_heap->size - old_heap_size;
set_head (old_top, (((char *) old_heap + old_heap->size) - (char *) old_top)
| PREV_INUSE);
}
else if ((heap = new_heap (nb + (MINSIZE + sizeof (*heap)), mp_.top_pad)))
{
/* Use a newly allocated heap. */
heap->ar_ptr = av;
heap->prev = old_heap;
av->system_mem += heap->size;
/* Set up the new top. */
top (av) = chunk_at_offset (heap, sizeof (*heap));
set_head (top (av), (heap->size - sizeof (*heap)) | PREV_INUSE);
/* Setup fencepost and free the old top chunk with a multiple of
MALLOC_ALIGNMENT in size. */
/* The fencepost takes at least MINSIZE bytes, because it might
become the top chunk again later. Note that a footer is set
up, too, although the chunk is marked in use. */
old_size = (old_size - MINSIZE) & ~MALLOC_ALIGN_MASK;
set_head (chunk_at_offset (old_top, old_size + CHUNK_HDR_SZ),
0 | PREV_INUSE);
if (old_size >= MINSIZE)
{
set_head (chunk_at_offset (old_top, old_size),
CHUNK_HDR_SZ | PREV_INUSE);
set_foot (chunk_at_offset (old_top, old_size), CHUNK_HDR_SZ);
set_head (old_top, old_size | PREV_INUSE | NON_MAIN_ARENA);
_int_free (av, old_top, 1);
}
else
{
set_head (old_top, (old_size + CHUNK_HDR_SZ) | PREV_INUSE);
set_foot (old_top, (old_size + CHUNK_HDR_SZ));
}
}
else if (!tried_mmap)
{
/* We can at least try to use to mmap memory. If new_heap fails
it is unlikely that trying to allocate huge pages will
succeed. */
char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
if (mm != MAP_FAILED)
return mm;
}
}
```

</details>
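To see a secondary arena (and the heaps managed by this code path) in action, here is a sketch that allocates from another thread; chunks from a non-main arena carry the `NON_MAIN_ARENA` bit (bit 2) in their size field. This peeks at glibc internals, is not portable, and needs to be built with `-pthread`:

```c
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

static void *worker(void *arg)
{
    (void)arg;
    void *p = malloc(0x100);      /* usually served from a thread (non-main) arena */
    size_t sz = ((size_t *)p)[-1];
    printf("thread chunk %p size field %#zx NON_MAIN_ARENA=%zu\n",
           p, sz, (sz >> 2) & 1);
    free(p);
    return NULL;
}

int main(void)
{
    void *m = malloc(0x100);      /* main thread: main arena */
    size_t sz = ((size_t *)m)[-1];
    printf("main   chunk %p size field %#zx NON_MAIN_ARENA=%zu\n",
           m, sz, (sz >> 2) & 1);

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);

    free(m);
    return 0;
}
```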
### sysmalloc main arena

It starts by calculating the amount of memory needed. It will start by requesting contiguous memory so that in this case it's possible to use the old unused memory. Some alignment operations are also performed.
<details>
<summary>sysmalloc main arena</summary>

```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2665C1-L2713C10
  else     /* av == main_arena */
    {
      /* Request enough space for nb + pad + overhead */
      size = nb + mp_.top_pad + MINSIZE;

      /*
        If contiguous, we can subtract out existing space that we hope to
        combine with new space. We add it back later only if
        we don't actually get contiguous space.
      */
      if (contiguous (av))
        size -= old_size;

      /*
        Round to a multiple of page size or huge page size.
        If MORECORE is not contiguous, this ensures that we only call it
        with whole-page arguments.  And if MORECORE is contiguous and
        this is not first time through, this preserves page-alignment of
        previous calls. Otherwise, we correct to page-align below.
      */
#ifdef MADV_HUGEPAGE
      /* Defined in brk.c.  */
      extern void *__curbrk;
      if (__glibc_unlikely (mp_.thp_pagesize != 0))
        {
          uintptr_t top = ALIGN_UP ((uintptr_t) __curbrk + size,
                                    mp_.thp_pagesize);
          size = top - (uintptr_t) __curbrk;
        }
      else
#endif
        size = ALIGN_UP (size, GLRO(dl_pagesize));

      /*
        Don't try to call MORECORE if argument is so big as to appear
        negative. Note that since mmap takes size_t arg, it may succeed
        below even if we cannot call MORECORE.
      */
```

</details>
### sysmalloc main arena previous error 1

If the previous call returned `MORECORE_FAILURE`, try again to allocate memory using `sysmalloc_mmap_fallback`
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2715C7-L2740C10
if (brk == (char *) (MORECORE_FAILURE))
{
/*
If have mmap, try using it as a backup when MORECORE fails or
cannot be used. This is worth doing on systems that have "holes" in
address space, so sbrk cannot extend to give contiguous space, but
space is available elsewhere. Note that we ignore mmap max count
and threshold limits, since the space will not be used as a
segregated mmap region.
*/
char *mbrk = MAP_FAILED;
if (mp_.hp_pagesize > 0)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size,
mp_.hp_pagesize, mp_.hp_pagesize,
mp_.hp_flags, av);
if (mbrk == MAP_FAILED)
mbrk = sysmalloc_mmap_fallback (&size, nb, old_size, MMAP_AS_MORECORE_SIZE,
pagesize, 0, av);
if (mbrk != MAP_FAILED)
{
/* We do not need, and cannot use, another sbrk call to find end */
brk = mbrk;
snd_brk = brk + size;
}
}
```

### sysmalloc main arena continue

If the previous step didn't return `MORECORE_FAILURE` and it actually worked, perform some alignments:
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2742
  if (brk != (char *) (MORECORE_FAILURE))
    {
      if (mp_.sbrk_base == 0)
        mp_.sbrk_base = brk;
      av->system_mem += size;

      /*
        If MORECORE extends previous space, we can likewise extend top size.
      */
      if (brk == old_end && snd_brk == (char *) (MORECORE_FAILURE))
        set_head (old_top, (size + old_size) | PREV_INUSE);

      else if (contiguous (av) && old_size && brk < old_end)
        /* Oops!  Someone else killed our space..  Can't touch anything.  */
        malloc_printerr ("break adjusted to free malloc space");

      /*
        Otherwise, make adjustments:

        * If the first time through or noncontiguous, we need to call sbrk
          just to find out where the end of memory lies.

        * We need to ensure that all returned chunks from malloc will meet
          MALLOC_ALIGNMENT

        * If there was an intervening foreign sbrk, we need to adjust sbrk
          request size to account for fact that we will not be able to
          combine new space with existing space in old_top.

        * Almost all systems internally allocate whole pages at a time, in
          which case we might as well use the whole last page of request.
          So we allocate enough more memory to hit a page boundary now,
          which in turn causes future contiguous calls to page-align.
      */
      else
        {
          front_misalign = 0;
          end_misalign = 0;
          correction = 0;
          aligned_brk = brk;

          /* handle contiguous cases */
          if (contiguous (av))
            {
              /* Count foreign sbrk as system_mem.  */
              if (old_size)
                av->system_mem += brk - old_end;

              /* Guarantee alignment of first new chunk made from this space */
              front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
              if (front_misalign > 0)
                {
                  /*
                    Skip over some bytes to arrive at an aligned position.
                    We don't need to specially mark these wasted front bytes.
                    They will never be accessed anyway because
                    prev_inuse of av->top (and any chunk created from its start)
                    is always true after initialization.
                  */
                  correction = MALLOC_ALIGNMENT - front_misalign;
                  aligned_brk += correction;
                }

              /*
                If this isn't adjacent to existing space, then we will not
                be able to merge with old_top space, so must add to 2nd request.
              */
              correction += old_size;

              /* Extend the end address to hit a page boundary */
              end_misalign = (INTERNAL_SIZE_T) (brk + size + correction);
              correction += (ALIGN_UP (end_misalign, pagesize)) - end_misalign;

              assert (correction >= 0);
              snd_brk = (char *) (MORECORE (correction));

              /*
                If can't allocate correction, try to at least find out current
                brk.  It might be enough to proceed without failing.

                Note that if second sbrk did NOT fail, we assume that space
                is contiguous with first sbrk.  This is a safe assumption unless
                program is multithreaded but doesn't use locks and a foreign sbrk
                occurred between our first and second calls.
              */
              if (snd_brk == (char *) (MORECORE_FAILURE))
                {
                  correction = 0;
                  snd_brk = (char *) (MORECORE (0));
                }
              else
                madvise_thp (snd_brk, correction);
            }

          /* handle non-contiguous cases */
          else
            {
              if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
                /* MORECORE/mmap must correctly align */
                assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0);
              else
                {
                  front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK;
                  if (front_misalign > 0)
                    {
                      /*
                        Skip over some bytes to arrive at an aligned position.
                        We don't need to specially mark these wasted front bytes.
                        They will never be accessed anyway because
                        prev_inuse of av->top (and any chunk created from its start)
                        is always true after initialization.
                      */
                      aligned_brk += MALLOC_ALIGNMENT - front_misalign;
                    }
                }

              /* Find out current end of memory */
              if (snd_brk == (char *) (MORECORE_FAILURE))
                {
                  snd_brk = (char *) (MORECORE (0));
                }
            }

          /* Adjust top based on results of second sbrk */
          if (snd_brk != (char *) (MORECORE_FAILURE))
            {
              av->top = (mchunkptr) aligned_brk;
              set_head (av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);
              av->system_mem += correction;

              /*
                If not the first time through, we either have a
                gap due to foreign sbrk or a non-contiguous region.  Insert a
                double fencepost at old_top to prevent consolidation with space
                we don't own. These fenceposts are artificial chunks that are
                marked as inuse and are in any case too small to use.  We need
                two to make sizes and alignments work out.
              */
              if (old_size != 0)
                {
                  /*
                     Shrink old_top to insert fenceposts, keeping size a
                     multiple of MALLOC_ALIGNMENT. We know there is at least
                     enough space in old_top to do this.
                   */
                  old_size = (old_size - 2 * CHUNK_HDR_SZ) & ~MALLOC_ALIGN_MASK;
                  set_head (old_top, old_size | PREV_INUSE);

                  /*
                    Note that the following assignments completely overwrite
                    old_top when old_size was previously MINSIZE.  This is
                    intentional. We need the fencepost, even if old_top otherwise gets
                    lost.
                   */
                  set_head (chunk_at_offset (old_top, old_size),
                            CHUNK_HDR_SZ | PREV_INUSE);
                  set_head (chunk_at_offset (old_top,
                                             old_size + CHUNK_HDR_SZ),
                            CHUNK_HDR_SZ | PREV_INUSE);

                  /* If possible, release the rest. */
                  if (old_size >= MINSIZE)
                    {
                      _int_free (av, old_top, 1);
                    }
                }
            }
        }
    }
  } /* if (av != &main_arena) */
```
### sysmalloc finale

Finish the allocation by updating the arena information
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2921C3-L2943C12
  if ((unsigned long) av->system_mem > (unsigned long) (av->max_system_mem))
    av->max_system_mem = av->system_mem;
  check_malloc_state (av);

  /* finally, do the allocation */
  p = av->top;
  size = chunksize (p);

  /* check that one of the above allocation paths succeeded */
  if ((unsigned long) (size) >= (unsigned long) (nb + MINSIZE))
    {
      remainder_size = size - nb;
      remainder = chunk_at_offset (p, nb);
      av->top = remainder;
      set_head (p, nb | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0));
      set_head (remainder, remainder_size | PREV_INUSE);
      check_malloced_chunk (av, p, nb);
      return chunk2mem (p);
    }

  /* catch all failure paths */
  __set_errno (ENOMEM);
  return 0;
```
## sysmalloc_mmap

<details>
<summary>sysmalloc_mmap code</summary>
```c
// From https://github.com/bminor/glibc/blob/f942a732d37a96217ef828116ebe64a644db18d7/malloc/malloc.c#L2392C1-L2481C2
static void *
sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
{
  long int size;

  /*
    Round up size to nearest page.  For mmapped chunks, the overhead is one
    SIZE_SZ unit larger than for normal chunks, because there is no
    following chunk whose prev_size field could be used.

    See the front_misalign handling below, for glibc there is no need for
    further alignments unless we have have high alignment.
   */
  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
  else
    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);

  /* Don't try if size wraps around 0.  */
  if ((unsigned long) (size) <= (unsigned long) (nb))
    return MAP_FAILED;

#ifdef MAP_HUGETLB
  if (!(extra_flags & MAP_HUGETLB))
    madvise_thp (mm, size);
#endif

  __set_vma_name (mm, size, " glibc: malloc");

  /*
    The offset to the start of the mmapped region is stored in the prev_size
    field of the chunk.  This allows us to adjust returned start address to
    meet alignment requirements here and in memalign(), and still be able to
    compute proper address argument for later munmap in free() and realloc().
   */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */

  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
    {
      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
      front_misalign = 0;
    }
  else
    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;

  /* update statistics */
  int new = atomic_fetch_add_relaxed (&mp_.n_mmaps, 1) + 1;
  atomic_max (&mp_.max_n_mmaps, new);

  unsigned long sum;
  sum = atomic_fetch_add_relaxed (&mp_.mmapped_mem, size) + size;
  atomic_max (&mp_.max_mmapped_mem, sum);

  check_chunk (av, p);

  return chunk2mem (p);
}
```
</details>