What you will learn

  • Apply effective prompting techniques and best practices

  • Develop a systematic framework for prompting and building with LLMs

  • Learn how to apply prompt engineering techniques to common use cases

Course curriculum

    1. Introduction to Prompt Engineering

    2. About the Instructor

    3. Course Objectives

    4. Course Structure

    5. The tools and environment

    6. Setting up your Playground

    7. Setting up the OpenAI Playground

    1. What are LLMs?

    2. Base LLM vs. Instruction-Tuned LLM

    3. LLMs and LLM Providers

    4. Chat LLMs

    5. Chat LLM Common Use Cases

    6. How to Leverage LLMs?

    7. Quiz

    1. What is Prompt Engineering?

    2. Why Prompt Engineering?

    3. Elements of a Prompt

    4. First Basic Prompt

    5. Quiz

    1. Introduction to the OpenAI Playground

    2. OpenAI Playground - Roles

    3. OpenAI Playground - Temperature

    4. OpenAI Playground - Text Classification

    5. OpenAI Playground - Role Playing

    6. Exercise 1: Getting Started with OpenAI Playground

    7. Exercise 2: Text Summarization

    1. What makes a good prompt?

    2. Be clear and specific when prompting

    3. Using delimiters

    4. Specifying output length

    5. Output format

    6. Split Complex Tasks into Subtasks

    1. Introduction to Few-shot prompting

    2. How many demonstrations?

    3. Tips for preparing demonstrations

    4. Quiz

About this course

  • 45 lessons
  • Projects to apply learnings
  • Earn a Certificate of Completion
  • Beginner

Instructor(s)

Martin Szummer, Ph.D.

Lead Instructor

Martin Szummer is a machine learning course instructor with two decades of experience at Google DeepMind, Microsoft Research, MIT, and the University of Cambridge. He has published award-winning research spanning deep learning, kernel methods, and Bayesian methods. At Microsoft, he pioneered algorithms that increased ad revenues by four million dollars in just two months, and at DeepMind, he developed causal machine learning approaches to optimize long-term user engagement. Before that, he co-founded and served as CTO of a startup building self-learning voice interfaces, leading product vision, engineering, and fundraising.

More about this course

OVERVIEW

This course focuses on key prompt engineering techniques for large language models (LLMs) and how to apply them effectively across a variety of scenarios and use cases. After completing this course, students will have a clear, systematic framework for prompting LLMs effectively and efficiently across a wide range of tasks and use cases.

PREREQUISITES

This course has no prerequisites. The main tool you will use is the OpenAI Playground, so no programming is required. You will need to create a paid OpenAI account; more details and instructions are provided in the course.

TOPICS

Throughout the course, students will use the OpenAI Playground to design and optimize prompts for several use cases.

Key concepts covered in the course include:

  • Introduction to LLMs: Learn the fundamentals of Large Language Models (LLMs), including their core types, applications, and practical implementation strategies. This module covers everything from basic concepts to hands-on usage of chat LLMs, preparing you to leverage LLMs effectively in real-world scenarios.
  • Introduction to Prompt Engineering: Master the skill of designing effective LLM prompts in this foundational module on Prompt Engineering. Learn what makes an effective prompt, why it matters, and how to write your first prompts to get optimal results from LLMs.
  • The OpenAI Playground: Explore OpenAI's Playground interface and learn to control an LLM's behavior through hands-on exercises. Students will learn and apply essential topics including roles, temperature settings, role-playing, and text classification (a minimal API sketch of roles and temperature appears after this list).
  • Improving Prompts: Elevate your prompt writing by learning the key elements of effective prompts. This module covers best practices for clarity, using delimiters, controlling output length, and formatting outputs, all essential techniques for getting consistent, high-quality responses from LLMs.
  • Few-shot Prompting: Master the technique of few-shot prompting to improve LLM performance through examples. Learn how to use demonstrations effectively in your prompts, determine the optimal number of examples, and prepare them for the best results (see the few-shot sketch after this list).
  • Use Case - Information Extraction: Learn practical applications of prompt engineering for extracting structured information from text. This module covers how to apply both zero-shot and few-shot approaches to help you efficiently pull specific data from various content types; the few-shot sketch after this list uses this kind of extraction as its example task.
  • Chain-of-Thought Prompting: Discover how to guide LLMs through complex reasoning with Chain-of-Thought prompting. Practice this technique through hands-on exercises, including a practical case study on movie recommendations, followed by a quiz to test your understanding (a minimal chain-of-thought sketch closes this section).
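Although the course itself requires no programming and works entirely in the OpenAI Playground, the following minimal sketch shows how the same roles and temperature controls look when reproduced with the OpenAI Python SDK. The model name is a placeholder assumption and is not part of the course materials.

    # Minimal sketch: the "roles" and "temperature" controls from the Playground,
    # reproduced with the OpenAI Python SDK (openai>=1.0).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumption: substitute any chat model available to you
        temperature=0.2,       # lower temperature -> more focused, deterministic output
        messages=[
            # The system role sets overall behavior, like the system box in the Playground.
            {"role": "system", "content": "You are a concise assistant that answers in one sentence."},
            # The user role carries the actual request.
            {"role": "user", "content": "What does the temperature setting control in an LLM?"},
        ],
    )
    print(response.choices[0].message.content)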
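The next sketch illustrates few-shot prompting applied to information extraction, combining ideas from the Improving Prompts, Few-shot Prompting, and Information Extraction modules: a pair of demonstrations establishes the desired output format, and ### delimiters mark the input text. The example texts, field names, and model name are invented for illustration only.

    # Minimal sketch: a few-shot prompt for information extraction.
    # Demonstrations show the model the exact input/output format we want,
    # and ### delimiters mark where the input text begins and ends.
    from openai import OpenAI

    client = OpenAI()

    messages = [
        {"role": "system",
         "content": "Extract the person and the date from the text between ### delimiters. "
                    "Reply as JSON with the keys 'person' and 'date'."},
        # Demonstration 1
        {"role": "user", "content": "###Ada Lovelace published her notes in 1843.###"},
        {"role": "assistant", "content": '{"person": "Ada Lovelace", "date": "1843"}'},
        # Demonstration 2
        {"role": "user", "content": "###The contract was signed by Maria Chen on 4 March 2021.###"},
        {"role": "assistant", "content": '{"person": "Maria Chen", "date": "4 March 2021"}'},
        # New input to be handled in the demonstrated format
        {"role": "user", "content": "###Grace Hopper joined the project on June 1, 1949.###"},
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works for this sketch
        temperature=0,        # extraction benefits from deterministic output
        messages=messages,
    )
    print(response.choices[0].message.content)  # expected: JSON with person and date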
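Finally, a sketch of chain-of-thought prompting: the key move is asking the model to reason step by step before giving its final answer. The word problem below is an invented stand-in for the course's movie-recommendation case study, and the model name is again a placeholder.

    # Minimal sketch: chain-of-thought prompting.
    # The instruction to reason step by step before answering is the core of the technique.
    from openai import OpenAI

    client = OpenAI()

    cot_prompt = (
        "A cinema sells tickets at $12 each. It sold 87 tickets on Friday and 134 on Saturday. "
        "Think step by step: first state the tickets sold each day, then the combined total, "
        "then the total revenue. Show your reasoning, then give the final answer on its own line."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": cot_prompt}],
    )
    print(response.choices[0].message.content)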