Building Apps with Large Language Models
Market Opportunities for Generative AI
Overview
Types of LLM and Generative AI Solutions
Trends in Competition and Billing
Impact on Cloud Computing and Business Applications
Adoption Across Industries, Sectors, and Types of Users
Assessment: A List of Generative AI Platforms
A Primer on Transformers
Introduction to Transformers and Attention
Primer on Attention
Transfer Learning in Large Language Models
Open-Source Transformer Development
Challenges with Transformers
Transformer Architecture
Introduction to the Transformer Architecture
Encoder Components: Self-Attention
The Feed-forward Layer
Encoder Components: Positional Embeddings
Encoder Components: Multi-headed Attention
Encoder Components: A Task-Specific Head
The Decoder
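As a companion to the encoder lessons above, here is a minimal NumPy sketch of scaled dot-product self-attention; the toy dimensions and random weights are illustrative assumptions, not taken from the course materials.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V, with the softmax taken over the keys
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))          # 3 tokens, embedding dim 4 (toy sizes)
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)                     # (3, 4): one context vector per token
```

Multi-headed attention repeats this computation with several independent projection triples and concatenates the per-head outputs.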
Text Generation
The Challenges of Text Generation
Greedy Search
Beam Search
Sampling Methods
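The greedy search, beam search, and sampling lessons above can be tried side by side with the Hugging Face transformers library; the GPT-2 checkpoint, prompt, and parameter values in this sketch are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The transformer architecture", return_tensors="pt")

# Greedy search: always pick the single most probable next token.
greedy = model.generate(**inputs, max_new_tokens=30, do_sample=False)

# Beam search: keep the num_beams most probable partial sequences.
beam = model.generate(**inputs, max_new_tokens=30, num_beams=5, do_sample=False)

# Sampling: draw from the truncated, re-scaled next-token distribution.
sampled = model.generate(**inputs, max_new_tokens=30, do_sample=True,
                         top_k=50, top_p=0.95, temperature=0.8)

for name, ids in [("greedy", greedy), ("beam", beam), ("sampling", sampled)]:
    print(name, "->", tokenizer.decode(ids[0], skip_special_tokens=True))
```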
Prototyping an App with the OpenAI API
Create Bubble app and API prompt in OpenAI
Set up API connector
Set up interface in Bubble for text generation
Set up workflow in Bubble to interface with OpenAI and generate results
Content Moderation API with Text Generation
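The Bubble lessons above wire these calls up visually through the API Connector; the sketch below shows the same two OpenAI endpoints (moderation, then chat completion) in Python. The model name and prompt are illustrative, and OPENAI_API_KEY is assumed to be set in the environment.

```python
from openai import OpenAI

client = OpenAI()
user_text = "Write a two-sentence product description for a smart mug."

# Step 1: screen the input with the Moderation API before generating.
moderation = client.moderations.create(input=user_text)
if moderation.results[0].flagged:
    print("Input rejected by content moderation.")
else:
    # Step 2: generate text, as the Bubble workflow does on button click.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": user_text}],
    )
    print(completion.choices[0].message.content)
```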
Prototyping a RAG-based Personal Knowledgebase with OpenAI and Pinecone
Overview of the Prototype
Notes-Saver: Store the text data in a vector database
Notes-Getter: Answer queries through OpenAI API using vector database
Notes-Injector: Store previously collected text data in a vector database
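A minimal Python sketch of the Notes-Saver and Notes-Getter flows above: embed a note with OpenAI, store it in Pinecone, then answer a query from the retrieved notes. The index name "notes", the model names, and the sample note are illustrative assumptions; a matching Pinecone index and both API keys are assumed to exist.

```python
import os
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
index = pc.Index("notes")  # assumed to exist with dimension 1536

def embed(text: str) -> list[float]:
    return client.embeddings.create(
        model="text-embedding-ada-002", input=text
    ).data[0].embedding

# Notes-Saver: store a note as a vector plus its original text as metadata.
note = "Pinecone indexes must match the embedding model's dimension."
index.upsert(vectors=[{"id": "note-1", "values": embed(note),
                       "metadata": {"text": note}}])

# Notes-Getter: retrieve the closest notes and let the LLM answer from them.
query = "What dimension should my Pinecone index have?"
results = index.query(vector=embed(query), top_k=3, include_metadata=True)
context = "\n".join(m.metadata["text"] for m in results.matches)
answer = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "system",
               "content": f"Answer using only these notes:\n{context}"},
              {"role": "user", "content": query}],
)
print(answer.choices[0].message.content)
```

The Notes-Injector lesson follows the same upsert path, just in bulk over data collected elsewhere.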
Prompting Best Practices
Understanding Prompts in Large Language Models
Instruction-based Fine-Tuning and Prompting Strategy
Reinforcement Learning from Human Feedback and Prompting Strategy
Zero-Shot Prompting
Adding a System Prompt
Few-Shot Prompting
Chain-of-Thought Prompting
Self-consistency
Tree-of-Thought Prompting
Prompting Design Patterns
Assessment: Retrieval Augmented Generation
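The prompting strategies above can be expressed as OpenAI-style chat payloads; the sentiment-classification task and the examples in this sketch are illustrative, not drawn from the course.

```python
# Zero-shot: the task alone, with no examples.
zero_shot = [
    {"role": "user",
     "content": "Classify the sentiment: 'The battery died in an hour.'"},
]

# Adding a system prompt: pin down the model's role and output format.
with_system = [
    {"role": "system",
     "content": "You are a sentiment classifier. Reply Positive or Negative only."},
    {"role": "user", "content": "The battery died in an hour."},
]

# Few-shot: show worked examples before the real input.
few_shot = [
    {"role": "user", "content": "Great screen, terrible speakers."},
    {"role": "assistant", "content": "Negative"},
    {"role": "user", "content": "Setup took thirty seconds. Love it."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "The battery died in an hour."},
]

# Chain-of-thought: ask for intermediate reasoning before the answer.
# Self-consistency: send this several times with temperature > 0 and
# majority-vote the final answers.
chain_of_thought = [
    {"role": "user",
     "content": "A store has 3 packs of 12 pens and sells 17. "
                "How many pens are left? Think step by step."},
]

for name, msgs in [("zero-shot", zero_shot), ("system prompt", with_system),
                   ("few-shot", few_shot), ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---")
    for m in msgs:
        print(f"{m['role']}: {m['content']}")
```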