Open Source Generative AI
Retail Price: $2,595.00
Next Date: 02/03/2025
Course Days: 3
What You’ll Learn
• Train and optimize Transformer models with PyTorch.
• Master advanced prompt engineering.
• Understand AI architecture, especially Transformers.
• Write a real-world AI web application.
• Describe tokenization and word embeddings.
• Install and use inference frameworks such as llama.cpp to run Llama 2 models.
• Apply strategies to maximize model performance.
• Explore model quantization and fine-tuning.
• Compare CPU vs. GPU hardware acceleration.
• Understand chat vs. instruct interaction modes.
Who Should Attend
• Project Managers
• Architects
• Developers
• Data Acquisition Specialists
Prerequisites
• Python - PCEP Certification or Equivalent Experience
• Familiarity with Linux
Outline
Deep Learning Intro
• Lecture: What is Intelligence?
• Lecture: Generative AI Unveiled
• Lecture: The Transformer Model
• Lecture: Feed Forward Neural Networks
• Lecture + Lab: Tokenization
• Lecture + Lab: Word Embeddings
• Lecture + Lab: Positional Encoding
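The tokenization and positional-encoding labs can be previewed with a short sketch. This is illustrative only, assuming a character-level tokenizer and the sinusoidal encoding from the original Transformer paper; the lab exercises may use different details.

```python
import math

# Character-level tokenizer: map each unique character to an integer id.
text = "hello world"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
itos = {i: ch for ch, i in stoi.items()}

def encode(s):
    """Text -> list of token ids."""
    return [stoi[c] for c in s]

def decode(ids):
    """List of token ids -> text."""
    return "".join(itos[i] for i in ids)

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding:
    # PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Encoding then decoding a string round-trips exactly, and row 0 of the positional encoding is the fixed sin(0)/cos(0) pattern, which is why the model can distinguish position 0 from every other position.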
Build a Transformer Model from Scratch
• Lecture: PyTorch
• Lecture + Lab: Construct a Tensor from a Dataset
• Lecture + Lab: Orchestrate Tensors in Blocks and Batches
• Lecture + Lab: Initialize PyTorch Generator Function
• Lecture + Lab: Train the Transformer Model
• Lecture + Lab: Apply Positional Encoding and Self-Attention
• Lecture + Lab: Attach the Feed Forward Neural Network
• Lecture + Lab: Build the Decoder Block
• Lecture + Lab: Transformer Model as Code
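The heart of the decoder block built in these labs is masked self-attention. A minimal sketch, shown here with NumPy for brevity (the labs themselves use PyTorch tensors), of scaled dot-product attention with a causal mask:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention over a (seq_len, d_model) input."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)          # scaled dot products
    # Causal mask: position t may only attend to positions <= t.
    mask = np.triu(np.ones_like(scores), k=1).astype(bool)
    scores[mask] = -np.inf
    return softmax(scores) @ v               # weighted sum of values
```

Because of the causal mask, the output at position 0 is exactly its own value vector; later positions mix in earlier ones, which is what lets the decoder generate text left to right.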
Prompt Engineering
• Lecture: Introduction to Prompt Engineering
• Lecture + Lab: Getting Started with Gemini
• Lecture + Lab: Developing Basic Prompts
• Lecture + Lab: Intermediate Prompts: Defining the Task, Inputs, Outputs, Constraints, and Style
• Lecture + Lab: Advanced Prompts: Chaining, Set Role, Feedback, Examples
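The intermediate-prompt structure above can be assembled programmatically. A minimal sketch, in which the field names are illustrative rather than a fixed API:

```python
def build_prompt(role, task, inputs, constraints, style):
    """Assemble a structured prompt from the components covered in the
    labs (role, task, inputs, constraints, style)."""
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Input: {inputs}",
        f"Constraints: {constraints}",
        f"Style: {style}",
    ])

prompt = build_prompt(
    role="You are a technical support assistant.",
    task="Summarize the bug report below.",
    inputs="The app crashes when uploading files over 2 GB.",
    constraints="Three sentences maximum.",
    style="Plain, non-technical language.",
)
```

Keeping each component on its own labeled line makes prompts easy to audit and to vary one field at a time, which is the basis for the chaining and feedback techniques in the advanced lab.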
Hardware Requirements
• Lecture: The GPU's Role in AI Performance (CPU vs. GPU)
• Lecture: Current GPUs: Cost vs. Value
• Lecture: Tensor Cores vs. Older GPU Architectures
Pre-trained LLMs
• Lecture: A History of Neural Network Architectures
• Lecture: Introduction to the llama.cpp Interface
• Lecture: Preparing an A100 for Server Operations
• Lecture + Lab: Operate Llama 2 Models with llama.cpp
• Lecture + Lab: Selecting a Quantization Level to Meet Performance and Perplexity Requirements
• Lecture: Running the llama.cpp Package
• Lecture + Lab: Llama Interactive Mode
• Lecture + Lab: Persistent Context with Llama
• Lecture + Lab: Constraining Output with Grammars
• Lecture + Lab: Deploy Llama API Server
• Lecture + Lab: Develop a Llama Client Application
• Lecture + Lab: Write a Real-World AI Application using the Llama API
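A client for the deployed Llama API server might look like the sketch below. This assumes llama.cpp's built-in HTTP server listening on localhost:8080 and its native completion endpoint; verify the endpoint path and payload fields against the server documentation for your installed version.

```python
import json
import urllib.request

def build_request(prompt, n_predict=128):
    """Build the JSON payload for a completion request.

    Field names follow llama.cpp's server conventions (assumed here;
    check your version's docs).
    """
    return {"prompt": prompt, "n_predict": n_predict}

def complete(prompt, host="http://localhost:8080"):
    """Send a prompt to a running llama.cpp server and return the text."""
    req = urllib.request.Request(
        f"{host}/completion",
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["content"]
```

Separating payload construction from transport keeps the request format testable without a live server, a pattern the client-application lab can build on.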
Fine-Tuning
• Lecture + Lab: Using PyTorch to Fine-Tune Models
• Lecture + Lab: Advanced Prompt Engineering Techniques
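The mechanics of a fine-tuning step, forward pass, loss, gradient update, can be shown on a single linear layer. A toy sketch with hand-derived gradients; the lab itself uses PyTorch autograd and optimizers over transformer weights:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))      # stand-in for "pretrained" head weights
x = rng.normal(size=(8, 4))      # small batch of input features
y = rng.normal(size=(8, 2))      # target outputs for the new task
lr = 0.1

def mse(pred, target):
    return ((pred - target) ** 2).mean()

loss_before = mse(x @ W, y)
for _ in range(50):
    pred = x @ W
    # Gradient of the mean squared error with respect to W.
    grad = 2 * x.T @ (pred - y) / (x.shape[0] * y.shape[1])
    W -= lr * grad               # gradient-descent update
loss_after = mse(x @ W, y)
```

The loss decreases over the updates, which is exactly what fine-tuning does at scale: nudge pretrained weights toward a new objective without training from scratch.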
Testing and Pushing Limits
• Lecture + Lab: Maximizing Model Limits