LLM Training - Data Preparation
These are my notes from the book https://www.manning.com/books/build-a-large-language-model-from-scratch (highly recommended).
Pretraining
The pre-training phase of an LLM is the stage where the LLM is exposed to a lot of data so it learns about the language and about everything in general. This base model is usually fine-tuned later in order to specialise it in a specific topic.
Main LLM components
Usually an LLM is characterised by the configuration used to train it. These are the common components when training an LLM:
Parameters: Parameters are the learnable weights and biases in the neural network. These are the numbers that the training process adjusts to minimize the loss function and improve the model's performance on the task. LLMs usually use millions or billions of parameters.
Embedding Dimension: The size of the vector used to represent each token or word. LLMs usually use hundreds or thousands of dimensions.
Hidden Dimension: The size of the hidden layers in the neural network.
Number of Layers (Depth): How many layers the model has. LLMs usually use tens of layers.
Number of Attention Heads: In transformer models, this is how many separate attention mechanisms are used in each layer. LLMs usually use tens of heads.
Dropout: Roughly the percentage of activations that are randomly removed (their values turned to 0) during training, used to prevent overfitting. LLMs usually use between 0 and 20%.
Configuration of the GPT-2 model:
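As a reference, this is the configuration dictionary the book uses for the smallest (124M-parameter) GPT-2 model; the variable name GPT_CONFIG_124M follows the notebook:

```python
# GPT-2 (124M) configuration as used throughout the book
GPT_CONFIG_124M = {
    "vocab_size": 50257,     # size of the BPE vocabulary
    "context_length": 1024,  # maximum number of input tokens
    "emb_dim": 768,          # embedding dimension of each token
    "n_heads": 12,           # number of attention heads
    "n_layers": 12,          # number of transformer blocks
    "drop_rate": 0.1,        # dropout rate (10%)
    "qkv_bias": False        # no bias in the query/key/value projections
}
```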
Tokenizing
Tokenizing consists of splitting the data into specific chunks (tokens) and assigning them specific IDs (numbers).
A very simple tokenizer for text might just take each word of the text separately, along with punctuation symbols, and remove spaces.
Therefore, "Hello, world!"
would be: ["Hello", ",", "world", "!"]
Then, in order to assign each of the words and symbols a token ID (number), it's necessary to create the tokenizer vocabulary. If you are tokenizing, for example, a book, this could be all the different words of the book in alphabetical order, plus some extra tokens like:
[BOS] (Beginning of sequence): Placed at the beginning of a text, it indicates the start of a text (used to separate unrelated texts).
[EOS] (End of sequence): Placed at the end of a text, it indicates the end of a text (used to separate unrelated texts).
[PAD] (Padding): When the batch size is larger than one (as usual), this token is used to pad the shorter sequences so all entries of the batch have the same length.
[UNK] (Unknown): Represents words that are not in the vocabulary.
Following the example, having tokenized a text by assigning each word and symbol a position in the vocabulary, the tokenized sentence "Hello, world!" -> ["Hello", ",", "world", "!"] would be something like: [64, 455, 78, 467]
supposing that "Hello" is at position 64, "," at position 455, and so on in the resulting vocabulary array.
However, if the word "Bye" didn't exist in the text used to generate the vocabulary, this will result in: "Bye, world!" -> ["[UNK]", ",", "world", "!"] -> [987, 455, 78, 467], supposing the token [UNK] is at position 987.
BPE - Byte Pair Encoding
To avoid problems like needing a vocabulary entry for every possible word, LLMs like GPT use BPE, which iteratively merges the most frequent pairs of bytes (or tokens) into new vocabulary entries until no more frequent pairs can be merged (check Wikipedia). This way there are no "unknown" words: any text can be encoded, in the worst case as individual bytes, and the final vocabulary consists of all the discovered frequent byte sequences grouped as much as possible, while bytes that aren't frequently paired with the same neighbour remain tokens on their own.
Code Example
Let's understand this better from a code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
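The notebook uses OpenAI's tiktoken library for the real GPT-2 BPE. As a toy illustration of the merge idea only (character-level, on a made-up string, not GPT-2's actual byte-level algorithm with a learned merge table), the core loop looks like this:

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count how often each adjacent pair of tokens occurs
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair, new_token):
    # Replace every occurrence of the pair with the merged token
    out, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("abababcd")        # start from individual symbols
for _ in range(2):               # two merge steps for the demo
    pair = most_frequent_pair(tokens)
    tokens = merge_pair(tokens, pair, pair[0] + pair[1])

print(tokens)   # ['abab', 'ab', 'c', 'd']
```

First "a"+"b" is merged into "ab" (most frequent pair), then "ab"+"ab" into "abab"; each merged sequence becomes a new vocabulary entry.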
Data Sampling
LLMs like GPT work by predicting the next token based on the previous ones, so in order to train a model it's necessary to arrange the data that way.
For example, using the text "Lorem ipsum dolor sit amet, consectetur adipiscing elit,"
In order to prepare the model to learn to predict the following word (supposing each word is a token using the very basic tokenizer), and using a max length of 4 and a sliding window of 1, this is how the text should be prepared:
Note that if the sliding window had been 2, the next entry in the input array would start 2 tokens later instead of just one, but the target array would still predict only 1 token ahead. In PyTorch, this sliding window is expressed in the parameter stride (the smaller stride is, the more the samples overlap and the higher the risk of overfitting; usually stride is set equal to max_length so the same tokens aren't repeated).
Code Example
Let's understand this better from a code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
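A minimal sketch of the idea in plain Python (the token IDs here are just example values; the notebook wraps this logic in a PyTorch Dataset/DataLoader):

```python
# Example token IDs (in practice these come from the tokenizer)
token_ids = [290, 4920, 2241, 287, 257, 4489, 64, 319]

max_length = 4   # tokens per training sample
stride = 1       # how far the window slides between samples

inputs, targets = [], []
for i in range(0, len(token_ids) - max_length, stride):
    inputs.append(token_ids[i:i + max_length])           # what the model sees
    targets.append(token_ids[i + 1:i + max_length + 1])  # same window shifted by 1
```

Each target sequence is simply the input sequence shifted one position to the right, so every position learns to predict its next token.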
Token Embeddings
Now that we have all the text encoded as tokens, it's time to create the token embeddings. These embeddings will be the trainable weights assigned to each vocabulary token in each dimension. They usually start as small random values.
For example, for a vocabulary of size 6 and 3 dimensions (real LLMs have vocabularies of tens of thousands of tokens and hundreds or thousands of dimensions), this is how it's possible to generate some starting embeddings:
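A minimal sketch with PyTorch (the seed and sizes are arbitrary):

```python
import torch

torch.manual_seed(123)
vocab_size, emb_dim = 6, 3

# The embedding layer is a trainable lookup table of shape (vocab_size, emb_dim),
# initialized with small random values
embedding_layer = torch.nn.Embedding(vocab_size, emb_dim)
print(embedding_layer.weight)   # 6 rows (tokens) x 3 columns (dimensions)

# Looking up token IDs returns the corresponding rows of the weight matrix
token_ids = torch.tensor([2, 3, 5])
vectors = embedding_layer(token_ids)   # shape: (3, 3)
```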
Note how each token in the vocabulary (each of the 6 rows) has 3 dimensions (3 columns) with a value in each.
Therefore, in our training, each token will have a set of values (dimensions) acting as weights. So, if a training batch has size 8, with a max length of 4 and 256 dimensions, each batch will be a matrix of 8 x 4 x 256 (imagine batches of hundreds of entries, with hundreds of tokens per entry and thousands of dimensions...).
The values of the dimensions are fine-tuned during training.
Token Positions Embeddings
If you noticed, the embeddings give weights to tokens based only on the token itself. So if a word (supposing a word is a token) is at the beginning of a text, it'll have the same weights as if it's at the end, although its contribution to the sentence might be different.
Therefore, it's possible to apply absolute positional embeddings or relative positional embeddings. One will take into account the position of the token in the whole sentence, while the other will take into account distances between tokens. OpenAI GPT uses absolute positional embeddings.
Note that because absolute positional embeddings use the same number of dimensions as the token embeddings, they are added to them rather than adding extra dimensions to the matrix.
The position values are fine-tuned during training.
Code Example
Following with the code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb:
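Along the lines of the notebook, token and positional embeddings can be combined like this (sizes follow the 8 x 4 x 256 example above; the random batch stands in for real token IDs):

```python
import torch

torch.manual_seed(123)
vocab_size, context_length, emb_dim = 50257, 4, 256

token_emb = torch.nn.Embedding(vocab_size, emb_dim)
pos_emb = torch.nn.Embedding(context_length, emb_dim)

# A batch of 8 samples with 4 token IDs each (random IDs for the demo)
batch = torch.randint(0, vocab_size, (8, 4))

token_embeddings = token_emb(batch)                     # (8, 4, 256)
pos_embeddings = pos_emb(torch.arange(context_length))  # (4, 256), one row per position

# Positions are added (broadcast over the batch), not concatenated,
# so the matrix keeps the same number of dimensions
input_embeddings = token_embeddings + pos_embeddings    # (8, 4, 256)
```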
Attention Mechanisms and Self-Attention in Neural Networks
Attention mechanisms allow neural networks to focus on specific parts of the input when generating each part of the output. They assign different weights to different inputs, helping the model decide which inputs are most relevant to the task at hand. This is crucial in tasks like machine translation, where understanding the context of the entire sentence is necessary for accurate translation.
Understanding Attention Mechanisms
In traditional sequence-to-sequence models used for language translation, the model encodes an input sequence into a fixed-size context vector. However, this approach struggles with long sentences because the fixed-size context vector may not capture all necessary information. Attention mechanisms address this limitation by allowing the model to consider all input tokens when generating each output token.
Example: Machine Translation
Consider translating the German sentence "Kannst du mir helfen diesen Satz zu übersetzen" into English. A word-by-word translation would not produce a grammatically correct English sentence due to differences in grammatical structures between languages. An attention mechanism enables the model to focus on relevant parts of the input sentence when generating each word of the output sentence, leading to a more accurate and coherent translation.
Introduction to Self-Attention
Self-attention, or intra-attention, is a mechanism where attention is applied within a single sequence to compute a representation of that sequence. It allows each token in the sequence to attend to all other tokens, helping the model capture dependencies between tokens regardless of their distance in the sequence.
Key Concepts
Tokens: Individual elements of the input sequence (e.g., words in a sentence).
Embeddings: Vector representations of tokens, capturing semantic information.
Attention Weights: Values that determine the importance of each token relative to others.
Calculating Attention Weights: A Step-by-Step Example
Let's consider the sentence "Hello shiny sun!" and represent each word with a 3-dimensional embedding:
Hello: [0.34, 0.22, 0.54]
shiny: [0.53, 0.34, 0.98]
sun: [0.29, 0.54, 0.93]
Our goal is to compute the context vector for the word "shiny" using self-attention.
Step 1: Compute Attention Scores
Just multiply each dimension value of the query with the corresponding one of each token and add the results. You get one value per pair of tokens.
For each word in the sentence, compute the attention score with respect to "shiny" by calculating the dot product of their embeddings.
Attention Score between "Hello" and "shiny"
Attention Score between "shiny" and "shiny"
Attention Score between "sun" and "shiny"
Step 2: Normalize Attention Scores to Obtain Attention Weights
Don't get lost in the mathematical terms; the goal of this function is simple: normalize all the weights so they sum to 1 in total.
Moreover, the softmax function is used because it accentuates differences due to the exponential part, making it easier to detect useful values.
Apply the softmax function to the attention scores to convert them into attention weights that sum to 1.
Calculating the exponentials: e^0.7842 ≈ 2.1906, e^1.3569 ≈ 3.8841, e^1.2487 ≈ 3.4858
Calculating the sum: 2.1906 + 3.8841 + 3.4858 ≈ 9.5605
Calculating attention weights: [2.1906/9.5605, 3.8841/9.5605, 3.4858/9.5605] ≈ [0.2291, 0.4063, 0.3646]
Step 3: Compute the Context Vector
Just take each attention weight, multiply it by the corresponding token's dimensions, and then sum all the weighted vectors to get just one vector (the context vector).
The context vector is computed as the weighted sum of the embeddings of all words, using the attention weights.
Calculating each component:
Weighted Embedding of "Hello":
Weighted Embedding of "shiny":
Weighted Embedding of "sun":
Summing the weighted embeddings:
context vector=[0.0779+0.2156+0.1057, 0.0504+0.1382+0.1972, 0.1237+0.3983+0.3390]=[0.3992,0.3858,0.8610]
This context vector represents the enriched embedding for the word "shiny," incorporating information from all words in the sentence.
Summary of the Process
Compute Attention Scores: Use the dot product between the embedding of the target word and the embeddings of all words in the sequence.
Normalize Scores to Get Attention Weights: Apply the softmax function to the attention scores to obtain weights that sum to 1.
Compute Context Vector: Multiply each word's embedding by its attention weight and sum the results.
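The three steps above can be checked numerically in a few lines of PyTorch (reproducing the "shiny" example):

```python
import torch

embeddings = torch.tensor([
    [0.34, 0.22, 0.54],   # Hello
    [0.53, 0.34, 0.98],   # shiny
    [0.29, 0.54, 0.93],   # sun
])
query = embeddings[1]                             # "shiny"

attn_scores = embeddings @ query                  # step 1: one dot product per token
attn_weights = torch.softmax(attn_scores, dim=0)  # step 2: normalize so they sum to 1
context_vec = attn_weights @ embeddings           # step 3: weighted sum of embeddings

print(context_vec)   # ≈ [0.3992, 0.3858, 0.8610]
```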
Self-Attention with Trainable Weights
In practice, self-attention mechanisms use trainable weights to learn the best representations for queries, keys, and values. This involves introducing three weight matrices:
The query matrix transforms the token whose context vector we are computing, while the key and value matrices transform the tokens being attended to. All three start as random values and are trained.
Step 1: Compute Queries, Keys, and Values
Each token will get its own query, key and value vector by multiplying its dimension values by the defined matrices:
These matrices transform the original embeddings into a new space suitable for computing attention.
Example
Assuming:
Input dimension din = 3 (embedding size)
Output dimension dout = 2 (desired dimension for queries, keys, and values)
Initialize the weight matrices:
Compute queries, keys, and values:
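A sketch of this step with random weight matrices of the assumed sizes din=3, dout=2 (gradients are disabled here just for the demo; in a model these matrices are trained):

```python
import torch

torch.manual_seed(123)
d_in, d_out = 3, 2    # embedding size 3, query/key/value size 2

x = torch.tensor([
    [0.34, 0.22, 0.54],   # Hello
    [0.53, 0.34, 0.98],   # shiny
    [0.29, 0.54, 0.93],   # sun
])

# Trainable weight matrices (frozen for the demo)
W_query = torch.nn.Parameter(torch.rand(d_in, d_out), requires_grad=False)
W_key   = torch.nn.Parameter(torch.rand(d_in, d_out), requires_grad=False)
W_value = torch.nn.Parameter(torch.rand(d_in, d_out), requires_grad=False)

queries = x @ W_query   # (3, 2): one query vector per token
keys    = x @ W_key     # (3, 2): one key vector per token
values  = x @ W_value   # (3, 2): one value vector per token
```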
Step 2: Compute Scaled Dot-Product Attention
Compute Attention Scores
Similar to the example from before, but this time, instead of using the raw dimension values of the tokens, we use each token's query and key vectors (already calculated from the dimensions). So, for each query qi and key kj:
Scale the Scores
To prevent the dot products from becoming too large, scale them by the square root of the key dimension dk. Dot products grow with the dimensionality, so dividing by sqrt(dk) keeps the scores in a regulated range before the softmax.
Apply Softmax to Obtain Attention Weights: Like in the initial example, normalize all the values so they sum to 1.
Step 3: Compute Context Vectors
Like in the initial example, just sum all the value vectors, multiplying each one by its attention weight:
Code Example
Grabbing an example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb you can check this class that implements the self-attention functionality we talked about:
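A sketch along the lines of the notebook's class (the names follow the book; treat the details here as illustrative rather than the exact notebook code):

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_in, d_out, qkv_bias=False):
        super().__init__()
        # nn.Linear registers the weight matrices as trainable parameters
        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)

    def forward(self, x):
        queries = self.W_query(x)
        keys = self.W_key(x)
        values = self.W_value(x)
        attn_scores = queries @ keys.transpose(-2, -1)
        # scale by sqrt(d_k) before the softmax
        attn_weights = torch.softmax(attn_scores / keys.shape[-1] ** 0.5, dim=-1)
        return attn_weights @ values   # one context vector per token

torch.manual_seed(123)
sa = SelfAttention(d_in=3, d_out=2)
x = torch.rand(3, 3)     # 3 tokens with 3 dimensions each
context = sa(x)          # (3, 2)
```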
Note that instead of initializing the matrices with random values directly, nn.Linear is used so all the weights are registered as parameters to train.
Causal Attention: Hiding Future Words
For LLMs we want the model to consider only the tokens that appear before the current position in order to predict the next token. Causal attention, also known as masked attention, achieves this by modifying the attention mechanism to prevent access to future tokens.
Applying a Causal Attention Mask
To implement causal attention, we apply a mask to the attention scores before the softmax operation so the remaining ones still sum to 1. This mask sets the attention scores of future tokens to negative infinity, ensuring that after the softmax, their attention weights are zero.
Steps
Compute Attention Scores: Same as before.
Apply Mask: Use an upper triangular matrix filled with negative infinity above the diagonal.
Apply Softmax: Compute attention weights using the masked scores.
Masking Additional Attention Weights with Dropout
To prevent overfitting, we can apply dropout to the attention weights after the softmax operation. Dropout randomly zeroes some of the attention weights during training.
A typical dropout rate is about 10-20%.
Code Example
Code example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb:
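A minimal sketch of the masking and dropout steps (the attention scores are random placeholders):

```python
import torch

torch.manual_seed(123)
num_tokens = 4
attn_scores = torch.rand(num_tokens, num_tokens)   # placeholder scores

# Upper-triangular boolean mask: True above the diagonal = future positions
mask = torch.triu(torch.ones(num_tokens, num_tokens, dtype=torch.bool), diagonal=1)
masked_scores = attn_scores.masked_fill(mask, float("-inf"))

# After softmax, future positions get weight 0 and each row still sums to 1
attn_weights = torch.softmax(masked_scores, dim=-1)

# Dropout on the attention weights (only active in training mode)
dropout = torch.nn.Dropout(0.1)
dropout.train()
dropped = dropout(attn_weights)
```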
Extending Single-Head Attention to Multi-Head Attention
Multi-head attention in practical terms consists of executing multiple instances of the self-attention function, each with its own weights, so different final vectors are calculated.
Code Example
It would be possible to reuse the previous code and just add a wrapper that launches it several times, but this is a more optimised version from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb that processes all the heads at the same time (reducing the number of expensive for loops). As you can see in the code, the dimensions of each token are divided among the heads. This way, if a token has 8 dimensions and we want to use 2 heads, the dimensions will be divided into 2 arrays of 4 dimensions and each head will use one of them:
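A sketch of such an implementation (the structure and names follow the notebook's MultiHeadAttention class, but treat the details here as illustrative):

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"
        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads   # each head gets a slice of the dimensions
        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out)   # combines the head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer(
            "mask", torch.triu(torch.ones(context_length, context_length), diagonal=1)
        )

    def forward(self, x):
        b, num_tokens, _ = x.shape
        # Project once, then split the last dimension into (num_heads, head_dim)
        queries = self.W_query(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)
        keys = self.W_key(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)
        values = self.W_value(x).view(b, num_tokens, self.num_heads, self.head_dim).transpose(1, 2)

        attn_scores = queries @ keys.transpose(2, 3)   # all heads in one matmul
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]
        attn_scores.masked_fill_(mask_bool, float("-inf"))   # causal mask

        attn_weights = torch.softmax(attn_scores / self.head_dim ** 0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Re-assemble the heads into vectors of size d_out
        context = (attn_weights @ values).transpose(1, 2).reshape(b, num_tokens, self.d_out)
        return self.out_proj(context)

torch.manual_seed(123)
mha = MultiHeadAttention(d_in=6, d_out=8, context_length=10, dropout=0.0, num_heads=2)
out = mha(torch.rand(2, 4, 6))   # (2, 4, 8)
```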
For another compact and efficient implementation you could use the torch.nn.MultiheadAttention class in PyTorch.
Short answer from ChatGPT about why it's better to divide the dimensions of the tokens among the heads instead of having each head check all the dimensions of all the tokens:
While allowing each head to process all embedding dimensions might seem advantageous because each head would have access to the full information, the standard practice is to divide the embedding dimensions among the heads. This approach balances computational efficiency with model performance and encourages each head to learn diverse representations. Therefore, splitting the embedding dimensions is generally preferred over having each head check all dimensions.
LLM Architecture
LLM architecture example from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb:
A high level representation can be observed in:
Input (Tokenized Text): The process begins with tokenized text, which is converted into numerical representations.
Token Embedding and Positional Embedding Layer: The tokenized text is passed through a token embedding layer and a positional embedding layer, which captures the position of tokens in a sequence, critical for understanding word order.
Transformer Blocks: The model contains 12 transformer blocks, each with multiple layers. These blocks repeat the following sequence:
Masked Multi-Head Attention: Allows the model to focus on different parts of the input text at once.
Layer Normalization: A normalization step to stabilize and improve training.
Feed Forward Layer: Responsible for processing the information from the attention layer and making predictions about the next token.
Dropout Layers: These layers prevent overfitting by randomly dropping units during training.
Final Output Layer: The model outputs a 4x50,257-dimensional tensor, where 50,257 represents the size of the vocabulary. Each row in this tensor corresponds to a vector that the model uses to predict the next word in the sequence.
Goal: The objective is to take these embeddings and convert them back into text. Specifically, the last row of the output is used to generate the next word, represented as "forward" in this diagram.
Code representation
Let's explain it step by step:
GELU Activation Function
Purpose and Functionality
GELU (Gaussian Error Linear Unit): An activation function that introduces non-linearity into the model.
Smooth Activation: Unlike ReLU, which zeroes out negative inputs, GELU smoothly maps inputs to outputs, allowing for small, non-zero values for negative inputs.
Mathematical Definition:
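Here Φ(x) is the CDF of the standard normal distribution; GPT-2 uses the tanh approximation on the right:

```latex
\mathrm{GELU}(x) = x \cdot \Phi(x) \approx 0.5\,x\left(1 + \tanh\!\left(\sqrt{2/\pi}\,\bigl(x + 0.044715\,x^{3}\bigr)\right)\right)
```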
The goal of using this function after the linear layers inside the FeedForward layer is to make the data non-linear, allowing the model to learn complex, non-linear relationships.
FeedForward Neural Network
Shapes have been added as comments to understand better the shapes of matrices:
Purpose and Functionality
Position-wise FeedForward Network: Applies a two-layer fully connected network to each position separately and identically.
Layer Details:
First Linear Layer: Expands the dimensionality from emb_dim to 4 * emb_dim.
GELU Activation: Applies non-linearity.
Second Linear Layer: Reduces the dimensionality back to emb_dim.
As you can see, the FeedForward network uses 3 layers. The first is a linear layer that multiplies the number of dimensions by 4 using linear weights (parameters trained inside the model). Then the GELU function is applied to all those dimensions to introduce non-linear variations that capture richer representations, and finally another linear layer brings the dimensions back to their original size.
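A sketch of such a FeedForward module, using PyTorch's built-in nn.GELU in place of the book's hand-written GELU class (shapes noted as comments):

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(emb_dim, 4 * emb_dim),   # expand: emb_dim -> 4 * emb_dim
            nn.GELU(),                         # non-linearity
            nn.Linear(4 * emb_dim, emb_dim),   # contract: 4 * emb_dim -> emb_dim
        )

    def forward(self, x):
        # x: (batch, num_tokens, emb_dim) -> output has the same shape
        return self.layers(x)

ff = FeedForward(emb_dim=768)
out = ff(torch.rand(2, 4, 768))   # (2, 4, 768)
```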
Multi-Head Attention Mechanism
This was already explained in an earlier section.
Purpose and Functionality
Multi-Head Self-Attention: Allows the model to focus on different positions within the input sequence when encoding a token.
Key Components:
Queries, Keys, Values: Linear projections of the input, used to compute attention scores.
Heads: Multiple attention mechanisms running in parallel (num_heads), each with a reduced dimension (head_dim).
Attention Scores: Computed as the dot product of queries and keys, scaled and masked.
Masking: A causal mask is applied to prevent the model from attending to future tokens (important for autoregressive models like GPT).
Attention Weights: Softmax of the masked and scaled attention scores.
Context Vector: Weighted sum of the values, according to attention weights.
Output Projection: Linear layer to combine the outputs of all heads.
The goal of this network is to find the relations between tokens in the same context. Moreover, the token dimensions are divided among different heads in order to prevent overfitting, although the relations found by each head are combined at the end of this network.
Moreover, during training a causal mask is applied so later tokens are not taken into account when computing the relations for a token, and some dropout is also applied to prevent overfitting.
Layer Normalization
Purpose and Functionality
Layer Normalization: A technique used to normalize the inputs across the features (embedding dimensions) for each individual example in a batch.
Components:
eps: A small constant (1e-5) added to the variance to prevent division by zero during normalization.
scale and shift: Learnable parameters (nn.Parameter) that allow the model to scale and shift the normalized output. They are initialized to ones and zeros, respectively.
Normalization Process:
Compute Mean (mean): Calculates the mean of the input x across the embedding dimension (dim=-1), keeping the dimension for broadcasting (keepdim=True).
Compute Variance (var): Calculates the variance of x across the embedding dimension, also keeping the dimension. The unbiased=False parameter ensures that the variance is calculated using the biased estimator (dividing by N instead of N-1), which is appropriate when normalizing over features rather than samples.
Normalize (norm_x): Subtracts the mean from x and divides by the square root of the variance plus eps.
Scale and Shift: Applies the learnable scale and shift parameters to the normalized output.
The goal is to ensure a mean of 0 and a variance of 1 across all dimensions of the same token. This stabilizes the training of deep neural networks by reducing internal covariate shift, which refers to the change in the distribution of network activations caused by the updating of parameters during training.
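A sketch of such a LayerNorm module following the components described above:

```python
import torch
import torch.nn as nn

class LayerNorm(nn.Module):
    def __init__(self, emb_dim, eps=1e-5):
        super().__init__()
        self.eps = eps                                   # avoids division by zero
        self.scale = nn.Parameter(torch.ones(emb_dim))   # learnable, starts at 1
        self.shift = nn.Parameter(torch.zeros(emb_dim))  # learnable, starts at 0

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)  # biased estimator (divide by N)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift

torch.manual_seed(123)
ln = LayerNorm(emb_dim=5)
out = ln(torch.rand(2, 5))   # per row: mean ≈ 0, variance ≈ 1
```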
Transformer Block
Shapes have been added as comments to understand better the shapes of matrices:
Purpose and Functionality
Composition of Layers: Combines multi-head attention, feedforward network, layer normalization, and residual connections.
Layer Normalization: Applied before the attention and feedforward layers for stable training.
Residual Connections (Shortcuts): Add the input of a layer to its output to improve gradient flow and enable training of deep networks.
Dropout: Applied after attention and feedforward layers for regularization.
Step-by-Step Functionality
First Residual Path (Self-Attention):
Input (shortcut): Save the original input for the residual connection.
Layer Norm (norm1): Normalize the input.
Multi-Head Attention (att): Apply self-attention.
Dropout (drop_shortcut): Apply dropout for regularization.
Add Residual (x + shortcut): Combine with the original input.
Second Residual Path (FeedForward):
Input (shortcut): Save the updated input for the next residual connection.
Layer Norm (norm2): Normalize the input.
FeedForward Network (ff): Apply the feedforward transformation.
Dropout (drop_shortcut): Apply dropout.
Add Residual (x + shortcut): Combine with the input from the first residual path.
The transformer block groups all the networks together and applies some normalization and dropout to improve training stability and results. Note how dropout is applied after the use of each network, while normalization is applied before.
Moreover, it also uses shortcuts, which consist of adding the output of a network to its input. This helps to prevent the vanishing gradient problem by making sure that initial layers contribute "as much" as the last ones.
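A compact sketch of the block using PyTorch built-ins (nn.LayerNorm, nn.GELU, nn.MultiheadAttention) instead of the custom modules the notebook defines, so the details differ from the book's code:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, emb_dim, num_heads, drop_rate):
        super().__init__()
        self.norm1 = nn.LayerNorm(emb_dim)
        self.att = nn.MultiheadAttention(emb_dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(emb_dim)
        self.ff = nn.Sequential(
            nn.Linear(emb_dim, 4 * emb_dim),
            nn.GELU(),
            nn.Linear(4 * emb_dim, emb_dim),
        )
        self.drop_shortcut = nn.Dropout(drop_rate)

    def forward(self, x):
        n = x.shape[1]
        # True above the diagonal = future positions the attention may not see
        causal_mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)

        # First residual path: norm -> attention -> dropout -> add input
        shortcut = x
        x = self.norm1(x)
        x, _ = self.att(x, x, x, attn_mask=causal_mask)
        x = self.drop_shortcut(x) + shortcut

        # Second residual path: norm -> feedforward -> dropout -> add input
        shortcut = x
        x = self.ff(self.norm2(x))
        x = self.drop_shortcut(x) + shortcut
        return x

torch.manual_seed(123)
block = TransformerBlock(emb_dim=64, num_heads=4, drop_rate=0.1)
block.eval()                       # disables dropout for a deterministic pass
out = block(torch.rand(2, 4, 64))  # (2, 4, 64): shape is preserved
```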
GPTModel
Shapes have been added as comments to understand better the shapes of matrices:
Purpose and Functionality
Embedding Layers:
Token Embeddings (tok_emb): Converts token indices into embeddings. As a reminder, these are the weights given to each dimension of each token in the vocabulary.
Positional Embeddings (pos_emb): Adds positional information to the embeddings to capture the order of tokens. As a reminder, these are the weights given to a token according to its position in the text.
Dropout (drop_emb): Applied to the embeddings for regularisation.
Transformer Blocks (trf_blocks): Stack of n_layers transformer blocks to process the embeddings.
Final Normalization (final_norm): Layer normalization before the output layer.
Output Layer (out_head): Projects the final hidden states to the vocabulary size to produce logits for prediction.
The goal of this class is to use all the other mentioned networks to predict the next token in a sequence, which is fundamental for tasks like text generation.
Note how it will use as many transformer blocks as indicated, and each transformer block uses one multi-head attention net, one feed-forward net and several normalizations. So if 12 transformer blocks are used, multiply this by 12.
Moreover, a normalization layer is added before the output, and a final linear layer is applied at the end to get the results with the proper dimensions. Note how each final vector has the size of the used vocabulary: this is because the model is trying to produce a probability for every possible token in the vocabulary.
Number of Parameters to train
Having the GPT structure defined it's possible to find out the number of parameters to train:
Step-by-Step Calculation
1. Embedding Layers: Token Embedding & Position Embedding
Token Embedding Layer: nn.Embedding(vocab_size, emb_dim), with vocab_size * emb_dim parameters.
Position Embedding Layer: nn.Embedding(context_length, emb_dim), with context_length * emb_dim parameters.
Total Embedding Parameters: vocab_size * emb_dim + context_length * emb_dim
2. Transformer Blocks
There are 12 transformer blocks, so we'll calculate the parameters for one block and then multiply by 12.
Parameters per Transformer Block
a. Multi-Head Attention
Components:
Query Linear Layer (W_query): nn.Linear(emb_dim, emb_dim, bias=False)
Key Linear Layer (W_key): nn.Linear(emb_dim, emb_dim, bias=False)
Value Linear Layer (W_value): nn.Linear(emb_dim, emb_dim, bias=False)
Output Projection (out_proj): nn.Linear(emb_dim, emb_dim)
Calculations:
Each of W_query, W_key, W_value has emb_dim * emb_dim parameters (no bias). Since there are three such layers: 3 * emb_dim * emb_dim.
Output Projection (out_proj): emb_dim * emb_dim + emb_dim parameters (weights plus bias).
Total Multi-Head Attention Parameters: 4 * emb_dim * emb_dim + emb_dim
b. FeedForward Network
Components:
First Linear Layer: nn.Linear(emb_dim, 4 * emb_dim)
Second Linear Layer: nn.Linear(4 * emb_dim, emb_dim)
Calculations:
First Linear Layer: emb_dim * 4 * emb_dim + 4 * emb_dim parameters (weights plus bias).
Second Linear Layer: 4 * emb_dim * emb_dim + emb_dim parameters (weights plus bias).
Total FeedForward Parameters: 8 * emb_dim * emb_dim + 5 * emb_dim
c. Layer Normalizations
Components:
Two LayerNorm instances per block.
Each LayerNorm has 2 * emb_dim parameters (scale and shift).
Calculations: 2 * 2 * emb_dim = 4 * emb_dim parameters per block.
d. Total Parameters per Transformer Block: (4 * emb_dim * emb_dim + emb_dim) + (8 * emb_dim * emb_dim + 5 * emb_dim) + 4 * emb_dim = 12 * emb_dim * emb_dim + 10 * emb_dim
Total Parameters for All Transformer Blocks: 12 * (12 * emb_dim * emb_dim + 10 * emb_dim)
3. Final Layers
a. Final Layer Normalization
Parameters: 2 * emb_dim (scale and shift)
b. Output Projection Layer (out_head)
Layer: nn.Linear(emb_dim, vocab_size, bias=False)
Parameters: emb_dim * vocab_size
4. Summing Up All Parameters
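Putting the formulas above together for the GPT-2 small configuration (emb_dim=768, vocab_size=50257, context_length=1024, 12 layers):

```python
emb_dim = 768
vocab_size = 50257
context_length = 1024
n_layers = 12

# 1. Embedding layers
embedding_params = vocab_size * emb_dim + context_length * emb_dim

# 2. One transformer block
mha_params = 3 * emb_dim * emb_dim + (emb_dim * emb_dim + emb_dim)  # Q/K/V (no bias) + out_proj
ff_params = (emb_dim * 4 * emb_dim + 4 * emb_dim) + (4 * emb_dim * emb_dim + emb_dim)
ln_params = 2 * (2 * emb_dim)                                       # two LayerNorms per block
block_params = mha_params + ff_params + ln_params

# 3. Final layers
final_norm_params = 2 * emb_dim
out_head_params = emb_dim * vocab_size                              # bias=False

# 4. Total
total = embedding_params + n_layers * block_params + final_norm_params + out_head_params
print(f"{total:,}")   # 163,009,536
```

The widely quoted "124M" figure for GPT-2 small comes from weight tying: the output head reuses the token embedding matrix instead of counting its emb_dim * vocab_size parameters separately.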
Generate Text
Having a model that predicts the next token like the one above, it's just needed to take the output values for the last token position (as they will be the ones of the predicted token), which will be one value per entry in the vocabulary, then use the softmax function to normalize them into probabilities that sum to 1, and finally get the index of the biggest entry, which will be the index of the predicted word inside the vocabulary.
Code from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb:
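A sketch of such a greedy decoding loop (modelled on the notebook's generate_text_simple; the dummy model used below is purely illustrative):

```python
import torch

def generate_text_simple(model, idx, max_new_tokens, context_size):
    # idx: (batch, n_tokens) tensor of token IDs
    for _ in range(max_new_tokens):
        idx_cond = idx[:, -context_size:]        # crop to the supported context
        with torch.no_grad():
            logits = model(idx_cond)             # (batch, n_tokens, vocab_size)
        logits = logits[:, -1, :]                # keep only the last position
        probas = torch.softmax(logits, dim=-1)   # normalize to probabilities
        idx_next = torch.argmax(probas, dim=-1, keepdim=True)  # greedy pick
        idx = torch.cat((idx, idx_next), dim=1)  # append and repeat
    return idx

# Dummy "model" that always predicts the last token again (illustrative only)
model = lambda x: torch.nn.functional.one_hot(x % 5, num_classes=5).float()
out = generate_text_simple(model, torch.tensor([[1, 2]]), max_new_tokens=2, context_size=4)
print(out.tolist())   # [[1, 2, 2, 2]]
```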