The goal of this fifth phase is very simple: develop the architecture of the full LLM. Put everything together, apply all the layers and create all the functions to generate text or transform text into IDs and back.
This architecture will be used both for training and for predicting text after it has been trained.
A high-level representation of the architecture:
Input (Tokenized Text): The process begins with tokenized text, which is converted into numerical representations.
Token Embedding and Positional Embedding Layer: The tokenized text is passed through a token embedding layer and a positional embedding layer, which captures the position of tokens in the sequence, critical for understanding word order.
Transformer Blocks: The model contains 12 transformer blocks, each with multiple layers. These blocks repeat the following sequence:
Masked Multi-Head Attention: Allows the model to focus on different parts of the input text at the same time.
Layer Normalization: A normalization step to stabilize and improve training.
Feed Forward Layer: Responsible for processing the information from the attention layer and making predictions about the next token.
Dropout Layers: These layers prevent overfitting by randomly dropping units during training.
Final Output Layer: The model outputs a 4x50,257-dimensional tensor, where 50,257 represents the size of the vocabulary. Each row in this tensor corresponds to a vector that the model uses to predict the next word in the sequence.
Goal: The objective is to take these embeddings and convert them back into text. Specifically, the last row of the output is used to generate the next word, represented as "forward" in this diagram.
Code representation
import torch
import torch.nn as nn
import tiktoken

class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))

class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        return self.layers(x)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads  # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out)  # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))

    def forward(self, x):
        b, num_tokens, d_in = x.shape

        keys = self.W_key(x)  # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3)  # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.contiguous().view(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec)  # optional projection

        return context_vec

class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift

class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # Shortcut connection for attention block
        shortcut = x
        x = self.norm1(x)
        x = self.att(x)   # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        # Shortcut connection for feed forward block
        shortcut = x
        x = self.norm2(x)
        x = self.ff(x)
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        return x

class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds  # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits

GPT_CONFIG_124M = {
    "vocab_size": 50257,     # Vocabulary size
    "context_length": 1024,  # Context length
    "emb_dim": 768,          # Embedding dimension
    "n_heads": 12,           # Number of attention heads
    "n_layers": 12,          # Number of layers
    "drop_rate": 0.1,        # Dropout rate
    "qkv_bias": False        # Query-Key-Value bias
}

# Example input batch: two short texts encoded with the GPT-2 tokenizer (tiktoken).
# Both texts encode to the same number of tokens so they can be stacked.
tokenizer = tiktoken.get_encoding("gpt2")
batch = torch.stack([
    torch.tensor(tokenizer.encode("Every effort moves you")),
    torch.tensor(tokenizer.encode("Every day holds a")),
], dim=0)

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)
out = model(batch)
print("Input batch:\n", batch)
print("\nOutput shape:", out.shape)
print(out)
GELU Activation Function
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))
Purpose and Functionality
GELU (Gaussian Error Linear Unit): An activation function that introduces non-linearity into the model.
Smooth Activation: Unlike ReLU, which zeroes out negative inputs, GELU smoothly maps inputs to outputs, allowing small, non-zero values for negative inputs.
Mathematical Definition:
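The tanh approximation implemented in the code above can be written as:

$$\text{GELU}(x) \approx 0.5\,x\left(1 + \tanh\!\left(\sqrt{\tfrac{2}{\pi}}\left(x + 0.044715\,x^{3}\right)\right)\right)$$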
The goal of using this function after the linear layers inside the FeedForward layer is to turn the linear data into non-linear data, allowing the model to learn complex, non-linear relationships.
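A minimal check (assuming the GELU class defined above and torch imported) that shows how, unlike ReLU, GELU keeps small non-zero outputs for negative inputs:

# GELU lets small negative values through instead of zeroing them out like ReLU.
x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
print(GELU()(x))      # ~[-0.0036, -0.1588, 0.0000, 0.8412, 2.9964]
print(torch.relu(x))  # [0., 0., 0., 1., 3.]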
FeedForward Neural Network
Shapes have been added as comments to better understand the shapes of the matrices:
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        # x shape: (batch_size, seq_len, emb_dim)

        x = self.layers[0](x)  # x shape: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[1](x)  # x shape remains: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[2](x)  # x shape: (batch_size, seq_len, emb_dim)

        return x  # Output shape: (batch_size, seq_len, emb_dim)
Purpose and Functionality
Position-wise FeedForward Network: Applies a two-layer fully connected network to each position separately and identically.
Layer Details:
First Linear Layer: Expands the dimensionality from emb_dim to 4 * emb_dim.
GELU Activation: Applies non-linearity.
Second Linear Layer: Reduces the dimensionality back to emb_dim.
As you can see, the Feed Forward network uses 3 layers. The first one is a linear layer that multiplies the dimensions by 4 using linear weights (parameters to train inside the model). Then, the GELU function is applied across all those dimensions to introduce non-linear variations and capture richer representations, and finally another linear layer is used to get back to the original number of dimensions.
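A quick shape check (assuming the FeedForward class and the GPT_CONFIG_124M dictionary defined in this section):

# The hidden layer expands to 4 * emb_dim internally, but the output keeps
# the same shape as the input, so the block can be stacked inside the model.
ffn = FeedForward(GPT_CONFIG_124M)
x = torch.rand(2, 3, GPT_CONFIG_124M["emb_dim"])  # (batch_size, seq_len, emb_dim)
print(ffn(x).shape)  # torch.Size([2, 3, 768])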
Multi-Head Attention Mechanism
This was already explained in an earlier section.
Purpose and Functionality
Multi-Head Self-Attention: Allows the model to focus on different positions within the input sequence when encoding a token.
Key Components:
Queries, Keys, Values: Linear projections of the input, used to compute attention scores.
Heads: Multiple attention mechanisms running in parallel (num_heads), each with a reduced dimension (head_dim).
Attention Scores: Computed as the dot product of queries and keys, scaled and masked.
Masking: A causal mask is applied to prevent the model from attending to future tokens (important for autoregressive models like GPT).
Attention Weights: Softmax of the masked and scaled attention scores.
Context Vector: Weighted sum of the values, according to the attention weights.
Output Projection: Linear layer to combine the outputs of all heads.
The goal of this network is to find the relations between tokens in the same context. Moreover, the tokens are divided into different heads in order to prevent overfitting, although the final relations found per head are combined at the end of this network.
Moreover, during training a causal mask is applied so that later tokens are not taken into account when looking for the specific relations of a token, and some dropout is also applied to prevent overfitting.
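A small usage sketch (assuming the MultiHeadAttention class from the full listing above); the sizes are chosen only for illustration:

torch.manual_seed(123)
x = torch.rand(2, 6, 768)  # (batch_size, num_tokens, d_in)
mha = MultiHeadAttention(d_in=768, d_out=768, context_length=6,
                         dropout=0.0, num_heads=12)
context_vecs = mha(x)
print(context_vecs.shape)  # torch.Size([2, 6, 768]) -> one context vector per token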
Layer Normalization
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5  # Prevent division by zero during normalization.
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift
Purpose and Functionality
Layer Normalization: A technique used to normalize the inputs across the features (embedding dimensions) for each individual example in a batch.
Components:
eps: A small constant (1e-5) added to the variance to prevent division by zero during normalization.
scale and shift: Learnable parameters (nn.Parameter) that allow the model to scale and shift the normalized output. They are initialized to ones and zeros, respectively.
Normalization Process:
Compute Mean (mean): Computes the mean of the input x across the embedding dimension (dim=-1), keeping the dimension for broadcasting (keepdim=True).
Compute Variance (var): Computes the variance of x across the embedding dimension, also keeping the dimension. The unbiased=False parameter ensures the variance is calculated using the biased estimator (dividing by N instead of N-1), which is appropriate when normalizing over features rather than samples.
Normalize (norm_x): Subtracts the mean from x and divides by the square root of the variance plus eps.
Scale and Shift: Applies the learnable scale and shift parameters to the normalized output.
The goal is to ensure a mean of 0 and a variance of 1 across all dimensions of the same token. The purpose of this is to stabilize the training of deep neural networks by reducing internal covariate shift, which refers to the change in the distribution of network activations due to the updating of parameters during training.
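A quick check (assuming the LayerNorm class above) that the normalized output has roughly zero mean and unit variance across the feature dimension:

torch.manual_seed(123)
batch_example = torch.randn(2, 5)  # 2 example "tokens" with 5 features each
ln = LayerNorm(emb_dim=5)
out_ln = ln(batch_example)
print(out_ln.mean(dim=-1, keepdim=True))                 # close to 0 for each row
print(out_ln.var(dim=-1, unbiased=False, keepdim=True))  # close to 1 for each row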
Transformer Block
Shapes have been added as comments to better understand the shapes of the matrices:
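The block below repeats the TransformerBlock class from the full listing above, with the tensor shapes annotated as comments (it relies on the MultiHeadAttention, FeedForward and LayerNorm classes already defined):

# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # x shape: (batch_size, num_tokens, emb_dim)

        # Shortcut connection for attention block
        shortcut = x               # (batch_size, num_tokens, emb_dim)
        x = self.norm1(x)          # (batch_size, num_tokens, emb_dim)
        x = self.att(x)            # (batch_size, num_tokens, emb_dim)
        x = self.drop_shortcut(x)  # (batch_size, num_tokens, emb_dim)
        x = x + shortcut           # Add the original input back

        # Shortcut connection for feed forward block
        shortcut = x               # (batch_size, num_tokens, emb_dim)
        x = self.norm2(x)          # (batch_size, num_tokens, emb_dim)
        x = self.ff(x)             # (batch_size, num_tokens, emb_dim)
        x = self.drop_shortcut(x)  # (batch_size, num_tokens, emb_dim)
        x = x + shortcut           # Add the original input back

        return x                   # (batch_size, num_tokens, emb_dim)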
Composition of Layers: Combines multi-head attention, a feedforward network, layer normalization, and residual connections.
Layer Normalization: Applied before the attention and feedforward layers for stable training.
Residual Connections (Shortcuts): Add the input of a layer to its output to improve gradient flow and enable the training of deep networks.
Dropout: Applied after the attention and feedforward layers for regularization.
Step-by-Step Functionality
First Residual Path (Self-Attention):
Input (shortcut): Save the original input for the residual connection.
Layer Norm (norm1): Normalize the input.
Multi-Head Attention (att): Apply self-attention.
Dropout (drop_shortcut): Apply dropout for regularization.
Add Residual (x + shortcut): Combine with the original input.
Second Residual Path (FeedForward):
Input (shortcut): Save the updated input for the next residual connection.
Layer Norm (norm2): Normalize the input.
FeedForward Network (ff): Apply the feedforward transformation.
Dropout (drop_shortcut): Apply dropout.
Add Residual (x + shortcut): Combine with the input from the first residual path.
The transformer block groups all the networks together and applies some normalization and dropouts to improve the training stability and results.
Note how dropouts are applied after the use of each network while normalization is applied before.
Moreover, it also uses shortcuts, which consist of adding the output of a network to its input. This helps to prevent the vanishing gradient problem by making sure that the initial layers contribute "as much" as the last ones.
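A quick shape check of the block (assuming TransformerBlock and GPT_CONFIG_124M from above); the output keeps the input shape, which is what allows several blocks to be stacked:

torch.manual_seed(123)
x = torch.rand(2, 4, GPT_CONFIG_124M["emb_dim"])  # (batch_size, num_tokens, emb_dim)
block = TransformerBlock(GPT_CONFIG_124M)
print(block(x).shape)  # torch.Size([2, 4, 768])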
GPTModel
Shapes have been added as comments to better understand the shapes of the matrices:
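The block below repeats the GPTModel class from the full listing above, with the tensor shapes annotated as comments (it relies on the TransformerBlock and LayerNorm classes already defined):

# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)

    def forward(self, in_idx):
        # in_idx shape: (batch_size, seq_len) of token IDs
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)                 # (batch_size, seq_len, emb_dim)
        pos_embeds = self.pos_emb(
            torch.arange(seq_len, device=in_idx.device))  # (seq_len, emb_dim)
        x = tok_embeds + pos_embeds                       # (batch_size, seq_len, emb_dim)
        x = self.drop_emb(x)                              # (batch_size, seq_len, emb_dim)
        x = self.trf_blocks(x)                            # (batch_size, seq_len, emb_dim)
        x = self.final_norm(x)                            # (batch_size, seq_len, emb_dim)
        logits = self.out_head(x)                         # (batch_size, seq_len, vocab_size)
        return logits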
Token Embeddings (tok_emb): Converts token indices into embeddings. As a reminder, these are the weights given to each dimension of each token in the vocabulary.
Positional Embeddings (pos_emb): Adds positional information to the embeddings to capture the order of the tokens. As a reminder, these are the weights given to a token according to its position in the text.
Dropout (drop_emb): Applied to the embeddings for regularization.
Transformer Blocks (trf_blocks): A stack of n_layers transformer blocks to process the embeddings.
Final Normalization (final_norm): Layer normalization before the output layer.
Output Layer (out_head): Projects the final hidden states to the vocabulary size to produce logits for prediction.
The goal of this class is to use all the other mentioned networks to predict the next token in a sequence, which is fundamental for tasks like text generation.
Note how it will use as many transformer blocks as indicated, and that each transformer block uses one multi-head attention net, one feed forward net and several normalizations. So if 12 transformer blocks are used, multiply this by 12.
Moreover, a normalization layer is added before the output, and a final linear layer is applied at the end to obtain the results with the proper dimensions. Note how each final vector has the size of the vocabulary used. This is because the model tries to get a probability for every possible token inside the vocabulary.
Number of Parameters to Train
Once the GPT structure is defined, it is possible to find out the number of parameters to train:
GPT_CONFIG_124M = {
    "vocab_size": 50257,     # Vocabulary size
    "context_length": 1024,  # Context length
    "emb_dim": 768,          # Embedding dimension
    "n_heads": 12,           # Number of attention heads
    "n_layers": 12,          # Number of layers
    "drop_rate": 0.1,        # Dropout rate
    "qkv_bias": False        # Query-Key-Value bias
}

model = GPTModel(GPT_CONFIG_124M)
total_params = sum(p.numel() for p in model.parameters())
print(f"Total number of parameters: {total_params:,}")
# Total number of parameters: 163,009,536
Step-by-Step Calculation
1. Embedding Layers: Token Embedding & Position Embedding
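Using the values from GPT_CONFIG_124M above, the two embedding layers contribute:
Token embedding layer: vocab_size * emb_dim = 50,257 * 768 = 38,597,376 parameters
Positional embedding layer: context_length * emb_dim = 1,024 * 768 = 786,432 parameters
Total embedding parameters: 38,597,376 + 786,432 = 39,383,808
The remaining parameters come from the 12 transformer blocks, the final LayerNorm and the output projection, which bring the total to the 163,009,536 printed above.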
Generate Text
Having a model that predicts the next token like the one above, it is only necessary to take the values of the last token from the output (as they will be those of the predicted token), which will be one value per entry in the vocabulary, then use the softmax function to normalize those values into probabilities that sum to 1, and finally get the index of the largest entry, which will be the index of the word inside the vocabulary.
def generate_text_simple(model, idx, max_new_tokens, context_size):
    # idx is (batch, n_tokens) array of indices in the current context
    for _ in range(max_new_tokens):

        # Crop current context if it exceeds the supported context size
        # E.g., if LLM supports only 5 tokens, and the context size is 10
        # then only the last 5 tokens are used as context
        idx_cond = idx[:, -context_size:]

        # Get the predictions
        with torch.no_grad():
            logits = model(idx_cond)

        # Focus only on the last time step
        # (batch, n_tokens, vocab_size) becomes (batch, vocab_size)
        logits = logits[:, -1, :]

        # Apply softmax to get probabilities
        probas = torch.softmax(logits, dim=-1)  # (batch, vocab_size)

        # Get the idx of the vocab entry with the highest probability value
        idx_next = torch.argmax(probas, dim=-1, keepdim=True)  # (batch, 1)

        # Append sampled index to the running sequence
        idx = torch.cat((idx, idx_next), dim=1)  # (batch, n_tokens+1)

    return idx


start_context = "Hello, I am"

encoded = tokenizer.encode(start_context)
print("encoded:", encoded)

encoded_tensor = torch.tensor(encoded).unsqueeze(0)
print("encoded_tensor.shape:", encoded_tensor.shape)

model.eval()  # disable dropout

out = generate_text_simple(
    model=model,
    idx=encoded_tensor,
    max_new_tokens=6,
    context_size=GPT_CONFIG_124M["context_length"]
)

print("Output:", out)
print("Output length:", len(out[0]))
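To turn the generated token IDs back into text, the tokenizer's decode method can be used (a minimal sketch, assuming the tiktoken GPT-2 tokenizer used earlier):

decoded_text = tokenizer.decode(out.squeeze(0).tolist())
print(decoded_text)  # With an untrained model this will be gibberish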