Deep Dive into LlaMA 3 by Hand ✍️

Explore the nuances of the transformer architecture behind Llama 3 and its prospects for the GenAI ecosystem

Srijanie Dey, PhD
Towards Data Science


Image by author (The shining LlaMA 3 rendition by my 4-year-old.)

“In the rugged mountains of the Andes lived three very beautiful creatures: Rio, Rocky and Sierra. With their lustrous coats and sparkling eyes, they stood out as beacons of strength and resilience.

As the story goes, from a very young age their thirst for knowledge was never-ending. They would seek out the wise elders of their herd, listening intently to their stories and absorbing their wisdom like a sponge. From that grew their superpower: working together with others, for they had learned that teamwork was the key to acing the trials of the challenging Andean terrain.

When they encountered travelers who had lost their way or needed help, Rio took in their perspective and guided them with comfort, Rocky provided swift solutions, and Sierra made sure they had the strength to carry on. With this they earned admiration and inspired everyone to follow their example.

As the sun set over the Andes, Rio, Rocky, and Sierra stood together, their spirits intertwined like the mountains themselves. And so, their story lived on as a testament to the power of knowledge, wisdom and collaboration and the will to make a difference.

They were the super-Llamas and the trio was lovingly called LlaMA3!”

LlaMA 3 by Meta

And this story is not very far from the story of Meta’s open-source Large Language Model (LLM), LlaMA 3 (Large Language Model Meta AI). On April 18, 2024, Meta released the LlaMA 3 family of large language models in 8B and 70B parameter sizes, claiming a major leap over LlaMA 2 and vying for state-of-the-art status among models at that scale.

According to Meta, there were four key focus points in building LlaMA 3: the model architecture, the pre-training data, scaling up pre-training, and instruction fine-tuning. This leads us to ponder what we can do to reap the most out of this very competent model, on an enterprise scale as well as at the grassroots level.

To help explore the answers to some of these questions, I collaborated with Eduardo Ordax, Generative AI Lead at AWS, and Prof. Tom Yeh, CS Professor at the University of Colorado Boulder.

So, let’s start the trek:

How can we leverage the power of LlaMA 3?

API vs Fine-Tuning

In current practice, there are two main ways these LLMs are accessed and worked with: through an API and through fine-tuning. Even within these two very different approaches, other factors in the process, shown in the following images, become crucial.

(All images in this section are courtesy of Eduardo Ordax.)

There are six main stages at which a user can interact with LlaMA 3.

Stage 1: Cater to broad, general-purpose usage by using the model as is.

Stage 2: Use the model within a user-defined application.

Stage 3: Use prompt engineering to steer the model toward the desired outputs.

Stage 4: Use prompt engineering on the user side, along with some data retrieval and fine-tuning that is still mostly managed by the LLM provider.

Stage 5: Take most of the matter into your own hands as the user, from prompt engineering to data retrieval and fine-tuning (RAG pipelines, PEFT methods, and so on).

Stage 6: Create the entire foundation model from scratch, from pre-training to post-training.

To gain the most from these models, the suggested best approach is to enter Stage 5, because that is where much of the flexibility lies with the user. Being able to customize the model to the domain need is crucial in order to maximize its gains, and staying out of the underlying systems does not yield optimal returns.

To be able to do so, here is a high-level picture of the tools that can prove useful:

The picture shows that in order to get the highest benefit from the models, a set structure and a roadmap are essential. There are three components to it:

  1. People: Not just end-users, but the whole range of practitioners, data engineers, data scientists, MLOps engineers, and ML engineers, along with prompt engineers, are important.
  2. Process: Not just plugging the LLM into an API, but focusing on the entire lifecycle of model evaluation, model deployment, and fine-tuning to cater to specific needs.
  3. Tools: Not just API access and API tools, but the entire range of environments, separate ML pipelines, separate accounts for access, and running checks.

Of course, this is true for an enterprise-level deployment, where the actual benefits of the model can be reaped. And to make that possible, the tools and practices under MLOps become very important. Combined with FMOps, these models can prove very valuable and enrich the GenAI ecosystem.

FMOps ⊆ MLOps ⊆ DevOps

MLOps, also known as Machine Learning Operations, is a part of Machine Learning Engineering that focuses on the development, deployment, and maintenance of ML models, ensuring that they run reliably and efficiently.

MLOps falls under DevOps (Development and Operations), applied specifically to ML models.

FMOps (Foundation Model Operations), on the other hand, works on Generative AI scenarios: selecting, evaluating, and fine-tuning LLMs.

With all of that said, one thing remains constant: LlaMA 3 is, after all, an LLM, and its implementation on the enterprise level is possible and beneficial only after the foundational elements are set and validated with rigor. To get there, let us explore the technical details behind LlaMA 3.

What is the secret sauce behind LlaMA 3’s claim to fame?

At the fundamental level, yes, it is the transformer. Going a little higher up in the process, the answer is the transformer architecture, highly optimized to achieve superior performance on common industry benchmarks while also enabling newer capabilities.

The good news is that since LlaMA 3 is open (open-source at Meta’s discretion), we have access to the Model Card, which gives us the details of how this powerful architecture is configured.

So, let’s dive in and unpack the goodness:

How does the transformer architecture, coupled with self-attention, play its role in LlaMA 3?

To start with, here is a quick review on how the transformer works:

  1. The transformer architecture can be perceived as a combination of the attention layer and the feed-forward layer.
  2. The attention layer combines information across token positions, working horizontally, to produce a new feature.
  3. The feed-forward layer (FFN) combines the parts or the characteristics of a feature to produce new parts/characteristics. It does so vertically, across dimensions.

(All the images in this section, unless otherwise noted, are by Prof. Tom Yeh, which I have edited with his permission.)

Below is a basic form of what the architecture looks like and how it functions.

The transformer architecture containing the attention and the feed-forward blocks.
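To make that division of labor concrete, here is a minimal PyTorch sketch of such a block. It is a simplification for intuition only, not Meta’s implementation: the real LlaMA 3 block also adds RMSNorm, rotary position embeddings, grouped-query attention, and a SwiGLU feed-forward network (the class name here is mine).

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """A minimal transformer block: self-attention followed by a feed-forward
    network, each wrapped in a residual connection. A simplified sketch, not
    Meta's implementation."""

    def __init__(self, dim: int, n_heads: int, hidden_dim: int):
        super().__init__()
        # Attention mixes information across token positions (horizontally).
        self.attention = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # The FFN mixes information within each token's feature vector (vertically).
        self.ffn = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attention(x, x, x)  # self-attention: Q = K = V = x
        x = x + attn_out                       # residual connection
        return x + self.ffn(x)                 # residual connection

# Toy usage: 1 sequence of 5 tokens with 3 features each, as in the hand exercise.
block = TransformerBlock(dim=3, n_heads=1, hidden_dim=8)
print(block(torch.randn(1, 5, 3)).shape)  # torch.Size([1, 5, 3])
```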

Here are the links to the deep-dive articles on Transformers and Self-Attention, where the entire process is discussed in detail.

The essentials of LlaMA 3

It’s time to get into the nitty-gritty and discover how the transformer numbers play out in the real-life LlaMA 3 model. For our discussion, we will only consider the 8B variant. Here we go:

- What are the LlaMA 3 8B model parameters?

The primary numbers/values that we need to explore here are the parameters that play a key role in the transformer architecture. They are as below:

  • Layers: Layers here refer to the basic blocks of the transformer, the attention layer and the FFN, as seen in the image above. The layers are stacked one above the other, where the input flows into one layer and its output is passed on to the next layer, gradually transforming the input data.
  • Attention heads: Attention heads are part of the self-attention mechanism. Each head scans the input sequence independently and performs the attention steps. (Remember: the QK module and the SoftMax function.)
  • Vocabulary size: The vocabulary refers to the number of tokens (words and word pieces) the model recognizes. Essentially, think of it as humans’ way of building a word repertoire so that we develop knowledge and versatility in a language. Most times, the bigger the vocabulary, the better the model performance.
  • Feature dimension: This dimension specifies the size of the vector representing each token in the input data. It remains consistent throughout the model, from the input embedding to the output of each layer.
  • Hidden dimension: This dimension is the internal size of the layers within the model, most commonly the size of the hidden layer of the feed-forward network. As is the norm, this size can be larger than the feature dimension, helping the model extract and process richer representations from the data.
  • Context-window size: The ‘window size’ here refers to the number of tokens from the input sequence that the model considers at once when calculating attention.

With the terms defined, let us refer to the actual numbers for these parameters in the LlaMA 3 model. (The original source code where these numbers are stated can be found here; a condensed view is sketched just below.)

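For convenience, here is an illustrative condensation of that configuration as a Python dataclass, modeled on the ModelArgs class in Meta’s repository. The values below are the published 8B settings (from the checkpoint’s params.json), with max_seq_len being set when the model is built; treat the repository as the authoritative source.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelArgs:
    """Illustrative condensation of the LlaMA 3 8B hyperparameters."""
    dim: int = 4096                        # feature dimension
    n_layers: int = 32                     # number of transformer blocks
    n_heads: int = 32                      # attention (query) heads
    n_kv_heads: Optional[int] = 8          # key/value heads (grouped-query attention)
    vocab_size: int = 128256               # ~128K tokens
    multiple_of: int = 1024                # FFN hidden size is rounded up to this
    ffn_dim_multiplier: Optional[float] = 1.3
    norm_eps: float = 1e-5                 # RMSNorm epsilon
    rope_theta: float = 500000.0           # rotary embedding base
    max_seq_len: int = 8192                # the 8K context window
```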

Keeping these values in mind, the next steps illustrate how each of them plays its part in the model. They are listed in their order of appearance in the source code.

[1] The context-window

While instantiating the Llama class, the variable max_seq_len defines the context window. There are other parameters in the class, but this one serves our purpose in relation to the transformer model. The max_seq_len here is 8K, which implies that the attention mechanism is able to scan 8K tokens at one go.
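One concrete place the context window shows up in the reference code is the pre-allocated key/value cache inside each attention layer, whose time dimension is max_seq_len. A schematic version (a batch of one keeps the demo small; the head sizes follow from the configuration above):

```python
import torch

max_batch_size, max_seq_len = 1, 8192  # batch of 1 keeps this demo small
n_kv_heads, head_dim = 8, 128          # 4096 features / 32 heads = 128 per head

# Each attention layer keeps key/value caches with max_seq_len slots,
# so the model can attend over up to 8K tokens at a time.
cache_k = torch.zeros(max_batch_size, max_seq_len, n_kv_heads, head_dim)
cache_v = torch.zeros(max_batch_size, max_seq_len, n_kv_heads, head_dim)
print(cache_k.shape)  # torch.Size([1, 8192, 8, 128])
```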

[2] Vocabulary-size and Attention Layers

Next up is the Transformer class, which defines the vocabulary size and the number of layers. Once again, the vocabulary size refers to the set of words (and tokens) that the model can recognize and process. Layers here refer to the number of transformer blocks (each a combination of the attention and feed-forward layers) used in the model.

Based on these numbers, LlaMA 3 has a vocabulary size of 128K, which is quite large. Additionally, it has 32 copies of the transformer block.
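These two numbers translate directly into tensor sizes. Here is a back-of-the-envelope sketch (assumed shapes, fp32 for simplicity; the real model uses lower-precision weights):

```python
import torch.nn as nn

vocab_size, dim, n_layers = 128256, 4096, 32

# The token embedding table: one 4096-dim row per vocabulary entry.
# At fp32 this alone is ~2 GB of weights.
tok_embeddings = nn.Embedding(vocab_size, dim)
print(tok_embeddings.weight.shape)    # torch.Size([128256, 4096])
print(tok_embeddings.weight.numel())  # 525336576 -> ~0.5B parameters

# The 32 layers are applied sequentially; schematically:
#   for layer in layers:  # 32 transformer blocks
#       h = layer(h)
```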

[3] Feature-dimension and Attention-Heads

The feature dimension and the attention heads make their way into the Self-Attention module. The feature dimension refers to the vector size of the tokens in the embedding space, and the attention heads comprise the QK module that powers the self-attention mechanism in the transformer.
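Here is a simplified sketch of how those two numbers shape the attention projections. The reference code uses parallelized linear layers and adds rotary embeddings, which are omitted here; note also that LlaMA 3 uses grouped-query attention, so the 32 query heads share only 8 key/value heads:

```python
import torch
import torch.nn as nn

dim, n_heads, n_kv_heads = 4096, 32, 8
head_dim = dim // n_heads  # 128 features per head

# Query/key/value projections. With grouped-query attention the key/value
# projections are smaller: 32 query heads share 8 key/value heads.
wq = nn.Linear(dim, n_heads * head_dim, bias=False)     # 4096 -> 4096
wk = nn.Linear(dim, n_kv_heads * head_dim, bias=False)  # 4096 -> 1024
wv = nn.Linear(dim, n_kv_heads * head_dim, bias=False)  # 4096 -> 1024

x = torch.randn(1, 16, dim)  # (batch, seq_len, features)
q = wq(x).view(1, 16, n_heads, head_dim)     # each head sees a 128-dim slice
k = wk(x).view(1, 16, n_kv_heads, head_dim)
print(q.shape, k.shape)  # [1, 16, 32, 128] and [1, 16, 8, 128]
```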

[4] Hidden Dimensions

The hidden dimension features in the Feed-Forward class, specifying the width of the hidden layer of the feed-forward network. For LlaMA 3 the hidden dimension works out to 14336, roughly 3.5 times the 4096 feature dimension. (The 1.3 in the configuration is a multiplier used inside a sizing formula, not the final ratio; the sketch below walks through the arithmetic.) A wider hidden layer allows the network to create and manipulate richer representations internally before projecting them back to the smaller output dimension.
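The hidden width is not written down directly in the configuration; it is derived from dim, the 1.3 multiplier, and the rounding constant. The arithmetic below mirrors the sizing logic in Meta’s reference FeedForward code:

```python
# Deriving the FFN hidden width for the 8B model.
dim = 4096
multiple_of = 1024
ffn_dim_multiplier = 1.3

hidden_dim = 4 * dim                                # 16384
hidden_dim = int(2 * hidden_dim / 3)                # 10922
hidden_dim = int(ffn_dim_multiplier * hidden_dim)   # 14198
# Round up to the nearest multiple of `multiple_of`:
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
print(hidden_dim)  # 14336, i.e. 3.5x the 4096 feature dimension
```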

[5] Combining the above parameters to form the Transformer

  • The first matrix is the input feature matrix, which goes through the Attention layer to create the Attention Weighted features. In this image the input feature matrix is only of size 5 x 3, but in the real-world LlaMA 3 model it grows up to 8K x 4096, which is enormous.
  • The next one is the hidden layer in the Feed-Forward Network, which expands to 14336 and then comes back down to 4096 in the final layer (see the shape sketch below).
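A quick shape check makes the expand-and-contract pattern visible. Toy dimensions are used below so it runs instantly; the commented values are the real 8B ones:

```python
import torch

seq_len, dim, hidden = 5, 3, 8  # real 8B values: 8192, 4096, 14336

x = torch.randn(seq_len, dim)
w1 = torch.randn(dim, hidden)   # expand: feature dim -> hidden dim
w2 = torch.randn(hidden, dim)   # contract: hidden dim -> feature dim
y = torch.relu(x @ w1) @ w2     # bare-bones FFN pass (LlaMA 3 actually uses SwiGLU)
print(y.shape)                  # torch.Size([5, 3]): same shape in, same shape out
```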

[6] Multiple-layers of the Transformer block

LlaMA 3 combines 32 of these transformer blocks, with the output of one block passing into the next until the last one is reached.

[7] Let’s put it all together

Once we have set all of the above pieces in motion, it is time to put them all together and see how they produce the LlaMA effect.

So, what is happening here?

Step 1: First we have our input matrix, which is of size 8K (context window) x 128K (vocabulary size); conceptually, each row marks one token out of the vocabulary. This matrix undergoes the process of embedding, which takes this high-dimensional matrix into a lower dimension.

Step 2: This lower dimension in this case turns out to be 4096, which is the specified dimension of the features in the LlaMA model, as we saw before. (The reduction from 128K to 4096 is immense and noteworthy.)

Step 3: This feature matrix goes through the Transformer block, where it is processed first by the Attention layer and then by the FFN layer. The attention layer processes it horizontally across tokens, whereas the FFN layer does so vertically across dimensions.

Step 4: Step 3 is repeated for all 32 layers of the Transformer block. In the end, the resultant matrix has the same dimensions as the feature matrix.

Step 5: Finally, this matrix is transformed back to the size of the vocabulary, 128K, so that the model can score and choose among the words available in the vocabulary.
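To summarize steps 1 through 5 in code, here is a shape-level sketch of the whole pipeline. It keeps only the skeleton (the real model adds RMSNorm, rotary embeddings, grouped-query attention, a SwiGLU FFN, and a causal mask), and the class names here are mine. The demo instantiates toy sizes, since the real values would allocate roughly 8B parameters:

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One simplified attention + FFN block (see the earlier sketch)."""
    def __init__(self, dim, n_heads, hidden_dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, hidden_dim), nn.SiLU(),
                                 nn.Linear(hidden_dim, dim))
    def forward(self, x):
        a, _ = self.attn(x, x, x)
        x = x + a
        return x + self.ffn(x)

class TinyLlama(nn.Module):
    """Shape-level skeleton of the LlaMA 3 forward pass (steps 1-5)."""
    def __init__(self, vocab_size, dim, n_layers, n_heads, hidden_dim):
        super().__init__()
        self.tok_embeddings = nn.Embedding(vocab_size, dim)           # steps 1-2
        self.layers = nn.ModuleList(
            Block(dim, n_heads, hidden_dim) for _ in range(n_layers)  # steps 3-4
        )
        self.output = nn.Linear(dim, vocab_size, bias=False)          # step 5

    def forward(self, tokens):
        h = self.tok_embeddings(tokens)  # (batch, seq_len) -> (batch, seq_len, dim)
        for layer in self.layers:        # 32 blocks in the real model
            h = layer(h)
        return self.output(h)            # logits over the whole vocabulary

# Toy demo; the real 8B values are vocab_size=128256, dim=4096,
# n_layers=32, n_heads=32, hidden_dim=14336.
model = TinyLlama(vocab_size=1000, dim=64, n_layers=2, n_heads=4, hidden_dim=224)
logits = model(torch.randint(0, 1000, (1, 16)))
print(logits.shape)  # torch.Size([1, 16, 1000])
```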

And that, essentially, is how LlaMA 3 scores high on those benchmarks and creates the LlaMA 3 effect.

The LlaMA 3 Effect

LlaMA 3 was released in two model versions, 8B and 70B parameters, to serve a wide range of use cases. In addition to achieving state-of-the-art performance on standard benchmarks, a new and rigorous human-evaluation set was also developed. Meta promises to release better and stronger versions of the model, making it multilingual and multimodal. The news is that newer and larger models with over 400B parameters are coming soon (early reports here show them already crushing benchmarks, with almost a 20% score increase over LlaMA 3).

However, it is imperative to say that in spite of all the upcoming changes and updates, one thing is going to remain the same: the foundation of it all, the transformer architecture and the transformer block that enable this incredible technical advancement.

It could be a coincidence that the LlaMA models were named so, but based on legend from the Andes mountains, real llamas have always been revered for their strength and wisdom. Not very different from the GenAI ‘LlaMA’ models.

So, let’s follow along in this exciting journey of the GenAI Andes while keeping in mind the foundation that powers these large language models!

P.S. If you would like to work through this exercise on your own, here is a link to a blank template for your use.

Blank Template for hand-exercise

Now go have fun and create some LlaMA 3 effect!

Image by author
