Demystifying Large Language Models

This book is a comprehensive guide aiming to demystify the world of transformers -- the architecture that powers Large Language Models (LLMs) like GPT and BERT. From PyTorch basics and mathematical foundations to implementing a Transformer from scratch, you'll gain a deep understanding of the inner workings of these models.

That's just the beginning. You'll pre-train your own Transformer from scratch, harness transfer learning to fine-tune LLMs for your specific use cases, and explore advanced fine-tuning techniques such as PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation), as well as RLHF (Reinforcement Learning from Human Feedback) for detoxifying LLMs and aligning them with human values and ethical norms.

Finally, step into the deployment of LLMs and deliver these state-of-the-art language models to the real world. Whether you are integrating them into cloud platforms or optimizing them for edge devices, this section equips you with the know-how to bring your AI solutions to life.

Whether you're a seasoned AI practitioner, a data scientist, or a curious developer eager to deepen your knowledge of LLMs, this book is your guide to mastering these cutting-edge models. It translates complex concepts into clear explanations and takes a practical, hands-on approach, making it valuable to beginners and experienced professionals alike.

Table of Contents

  1. INTRODUCTION

1.1 What is AI, ML, DL, Generative AI and Large Language Model

1.2 Lifecycle of Large Language Models

1.3 Whom This Book Is For

1.4 How This Book Is Organized

1.5 Source Code and Resources

  2. PYTORCH BASICS AND MATH FUNDAMENTALS

2.1 Tensor and Vector

2.2 Tensor and Matrix

2.3 Dot Product

2.4 Softmax

2.5 Cross Entropy

2.6 GPU Support

2.7 Linear Transformation

2.8 Embedding

2.9 Neural Network

2.10 Bigram and N-gram Models

2.11 Greedy, Random Sampling and Beam Search

2.12 Rank of Matrices

2.13 Singular Value Decomposition (SVD)

2.14 Conclusion

  3. TRANSFORMER

3.1 Dataset and Tokenization

3.2 Embedding

3.3 Positional Encoding

3.4 Layer Normalization

3.5 Feed Forward

3.6 Scaled Dot-Product Attention

3.7 Mask

3.8 Multi-Head Attention

3.9 Encoder Layer and Encoder

3.10 Decoder Layer and Decoder

3.11 Transformer

3.12 Training

3.13 Inference

3.14 Conclusion

  4. PRE-TRAINING

4.1 Machine Translation

4.2 Dataset and Tokenization

4.3 Load Data in Batch

4.4 Pre-Training nn.Transformer Model

4.5 Inference

4.6 Popular Large Language Models

4.7 Computational Resources

4.8 Prompt Engineering and In-context Learning (ICL)

4.9 Prompt Engineering on FLAN-T5

4.10 Pipelines

4.11 Conclusion

  5. FINE-TUNING

5.1 Fine-Tuning

5.2 Parameter Efficient Fine-tuning (PEFT)

5.3 Low-Rank Adaptation (LoRA)

5.4 Adapter

5.5 Prompt Tuning

5.6 Evaluation

5.7 Reinforcement Learning

5.8 Reinforcement Learning from Human Feedback (RLHF)

5.9 Implementation of RLHF

5.10 Conclusion

  6. DEPLOYMENT OF LLMS

6.1 Challenges and Considerations

6.2 Pre-Deployment Optimization

6.3 Security and Privacy

6.4 Deployment Architectures

6.5 Scalability and Load Balancing

6.6 Compliance and Ethics Review

6.7 Model Versioning and Updates

6.8 LLM-Powered Applications

6.9 Vector Database

6.10 LangChain

6.11 Chatbot, Example of LLM-Powered Application

6.12 WebUI, Example of LLM-Powered Application

6.13 Future Trends and Challenges

6.14 Conclusion

REFERENCES

ABOUT THE AUTHOR


Author | James Chen
Language | English
Type | E-book
Category | Computers & IT
