Building a Large Language Model From Scratch
Introduction to Large Language Models
Environment Setup for the Course
Python Setup Preferences
Installing Python Packages and Libraries
Working with Text Embeddings
The Main Data Loading Pipeline Summarized
Comparing Various Byte Pair Encoding (BPE) Implementations
Embedding Layers vs. Linear Layers
Data Sampling with a Sliding Window Using Number Data
Coding Attention Mechanisms
Multi-Head Attention Plus Data Loading
Comparing Efficient Multi-Head Attention Implementations
Understanding PyTorch Buffers
Implementing a GPT Model from Scratch
Parameters in the Feed Forward versus Attention Module
FLOPS Analysis for Large Language Models
Pretraining on Unlabeled Data
Temperature-Scaled Softmax Scores and Sampling Probabilities
Alternative Weight Loading from Hugging Face Model Hub Using Transformers
Pretraining GPT on the Project Gutenberg Dataset
Optimizing Hyperparameters for Pretraining
Building a User Interface to Interact With the Pretrained LLM
Converting a From-Scratch GPT Architecture to Llama 2
Converting Llama 2 to Llama 3.2 From Scratch
Finetuning for Text Classification
Load and Use the Finetuned Model
Increasing the Context Length
Additional Classification Finetuning Experiments
Scikit-learn Logistic Regression Model
Building a User Interface to Interact With the GPT-based Spam Classifier
Finetuning a Large Language Model to Follow Instructions
Load and Use the Finetuned Model
Changing Prompt Styles
Create "Passive Voice" Entries for an Instruction Dataset
Evaluating Instruction Responses Locally Using a Llama 3 Model via Ollama
Evaluating Instruction Responses Using the OpenAI API
Generating a Preference Dataset with Llama 3.2 and Ollama
Direct Preference Optimization (DPO) for LLM Alignment (From Scratch)
Improving Instruction Data via Reflection-Tuning Using GPT-4
Building a User Interface to Interact With the Instruction Finetuned GPT Model