Building Large Language Models from Scratch: Design, Train, and Deploy LLMs with PyTorch First Edition

★★★★★ 4.8 (28 ratings, 11 reviews)

$51.24
Price when purchased online
Free shipping Free 30-day returns

Sold and shipped by democodigos.pollafutbol.co


Product details

Management number: 219248963
Release date: 2026/05/03
List price: $20.50
Model number: 219248963

This book is a complete, hands-on guide to designing, training, and deploying your own Large Language Models (LLMs), from the foundations of tokenization to the advanced stages of fine-tuning and reinforcement learning. Written for developers, data scientists, and AI practitioners, it bridges core principles and state-of-the-art techniques, offering a rare, transparent look at how modern transformers truly work beneath the surface.

Starting from the essentials, you’ll learn how to set up your environment with Python and PyTorch, manage datasets, and implement critical fundamentals such as tensors, embeddings, and gradient descent. You’ll then progress through the architectural heart of modern models, covering RMS normalization (RMSNorm), rotary positional embeddings (RoPE), scaled dot-product attention, Grouped Query Attention (GQA), Mixture of Experts (MoE), and SwiGLU activations, each explored in depth and built step by step in code. As you advance, the book introduces custom CUDA kernel integration, teaching you how to optimize key components for speed and memory efficiency at the GPU level, an essential skill for scaling real-world LLMs.
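To give a flavor of the components named above, here is a minimal pure-Python sketch of RMS normalization. The book itself builds this in PyTorch; this is an illustrative reconstruction of the standard formula, not code from the book. Each element of the input vector is divided by the root mean square of the whole vector, then scaled by a learned per-element gain:

```python
import math

def rms_norm(x, gain, eps=1e-6):
    # Root mean square of the input vector (eps guards against division by zero)
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    # Normalize each element to unit RMS, then apply the learned gain
    return [g * v / rms for g, v in zip(gain, x)]
```

Unlike LayerNorm, RMSNorm skips mean subtraction, which makes it slightly cheaper while still keeping activations at a stable scale.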
You’ll also gain mastery over the phases of training that define today’s leading models:

- Pretraining: building general linguistic and semantic understanding.
- Midtraining: expanding domain-specific capabilities and adaptability.
- Supervised Fine-Tuning (SFT): aligning behavior with curated, task-driven data.
- Reinforcement Learning from Human Feedback (RLHF): refining responses through reward-based optimization for human alignment.

The final chapters guide you through dataset preparation, filtering, deduplication, and training optimization, culminating in model evaluation and real-world prompting with a custom TokenGenerator for text generation and inference. By the end of this book, you’ll have the knowledge and confidence to architect, train, and deploy your own transformer-based models, equipped with both the theoretical depth and practical expertise to innovate in the rapidly evolving world of AI.

What You’ll Learn

- How to configure and optimize your development environment using PyTorch.
- The mechanics of tokenization, embeddings, normalization, and attention mechanisms.
- How to implement transformer components such as RMSNorm, RoPE, GQA, MoE, and SwiGLU from scratch.
- How to integrate custom CUDA kernels to accelerate transformer computations.
- The full LLM training pipeline: pretraining, midtraining, supervised fine-tuning, and RLHF.
- Techniques for dataset preparation, deduplication, model debugging, and GPU memory management.
- How to train, evaluate, and deploy a complete GPT-like architecture for real-world tasks.

Who This Book Is For

Software developers, data scientists, machine learning engineers, and AI enthusiasts looking to build their own models from scratch.
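As a second taste of the attention mechanisms mentioned above, scaled dot-product attention for a single query can be sketched in a few lines of pure Python. The book develops the batched, multi-head PyTorch version; this is only an illustrative reconstruction of the standard formula:

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(q, ks, vs):
    # One query vector q against lists of key vectors ks and value vectors vs
    d = len(q)
    # Dot-product similarity of the query with each key, scaled by sqrt(d)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in ks]
    weights = softmax(scores)
    # Output is the attention-weighted sum of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, vs)) for i in range(len(vs[0]))]
```

Scaling by the square root of the key dimension keeps the dot products in a range where the softmax stays well-behaved as the head dimension grows.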

ISBN13 979-8868822964
Edition First Edition
Language English
Publisher Apress
Dimensions 7.01 x 10 x 1.14 inches
Item Weight 2.28 pounds
Print length 555 pages
Publication date April 28, 2026


Customer ratings & reviews

4.8 out of 5 (28 ratings, 11 reviews)

5 stars: 87% (24)
4 stars: 2% (1)
3 stars: 1% (0)
2 stars: 0% (0)
1 star: 10% (3)