Module 1: Introduction to Prompt Engineering
Welcome to this first module dedicated to the fundamentals of prompt engineering. The objective is to provide you with a solid understanding of what this discipline is, its growing importance, and the central role it plays in our interactions with generative AI.
Lesson 1.1: What is Prompt Engineering?
Prompt engineering is the art and science of designing and optimizing instructions, called "prompts," to guide large language models (LLMs) toward generating precise, relevant, and useful responses. It's a discipline that sits at the intersection of linguistics, computer science, and creativity.
Prompt engineering, also called "query engineering," is a technique that consists of providing detailed instructions to natural language processing (NLP) models in order to improve their performance. [1]
In essence, a prompt engineer acts as a translator or mediator between human intention and machine logic. Rather than simply asking a question, it involves formulating it in the most effective way possible so that the model understands not only the explicit request, but also the context, the desired output format, and the constraints to be respected.
The emergence of this discipline is directly linked to the advent of large language models like GPT-3 and its successors. As these models became increasingly powerful, it became evident that the quality of their results depended crucially on the quality of the instructions provided. Prompt engineering was thus born from the necessity to fully exploit the potential of these technologies.
Lesson 1.2: The Role of the Prompt Engineer
The prompt engineer is a new profession whose contours are rapidly taking shape. Their main role is to ensure that the company gets the most out of the generative AI systems it uses. This translates into several key missions.
Mission | Description |
---|---|
Prompt Design and Writing | Create clear, precise, and effective queries to generate content (text, image, code, etc.) that meets a specific need. |
Optimization and Iteration | Test, analyze, and refine prompts iteratively to improve the quality, reliability, and consistency of AI responses. |
Training and Model Improvement | Participate in the continuous improvement of AI models by identifying their biases, limitations, and creating datasets for their training. |
Training and Documentation | Write best practice guides, train other employees in the use of AI tools, and document the most effective prompts. |
To fulfill these missions, the prompt engineer must possess a varied set of skills, both technical and human.
Technical Skills:
- Understanding of LLMs: Deep knowledge of the functioning, strengths, and weaknesses of different language models.
- Natural Language Processing (NLP): Solid foundations in NLP to understand how models interpret language.
- Programming Languages: Mastery of languages like Python is often required to automate tasks and interact with model APIs.
- Data Analysis: The ability to analyze prompt performance quantitatively.
Human Skills (Soft Skills):
- Creativity and Curiosity: The imagination to explore new ways of formulating prompts.
- Critical and Analytical Thinking: The ability to break down a complex problem into simple instructions.
- Communication and Pedagogy: The aptitude to explain technical concepts and train non-expert users.
- Perseverance and Patience: Prompt optimization is a process that requires many trials and adjustments.
Lesson 1.3: Introduction to Large Language Models (LLMs)
Large language models are the engine of generative AI and the main tool of the prompt engineer. Understanding their functioning is essential for interacting effectively with them.
An LLM is a type of artificial neural network trained on very large amounts of textual data. The most common architecture today is the Transformer, introduced in 2017. This architecture allows the model to handle long-range dependencies in text and pay particular attention to certain words based on context (the "attention" mechanism).
The training process is generally done in two stages:
- Pre-training: The model learns to predict the next word in a sentence, using billions of documents drawn from the Internet (Wikipedia, books, articles, etc.). It's during this phase that it acquires general knowledge of the world and an understanding of grammar, syntax, and semantics.
- Fine-tuning: The model is then specialized on more specific tasks (translation, summarization, question answering) using more restricted and high-quality datasets, often with human supervision (such as Reinforcement Learning from Human Feedback - RLHF).
There is now a great variety of LLMs, developed by different companies and organizations. Here are some of the best known:
Model | Developer | Notable Characteristics |
---|---|---|
GPT (Generative Pre-trained Transformer) | OpenAI | One of the most advanced and popular model families, known for its excellent text generation and reasoning capabilities. |
Llama (Large Language Model Meta AI) | Meta | An open-source model family that has quickly gained popularity, fostering innovation and community research. |
Gemini | Google | A family of native multimodal models, capable of processing and understanding text, images, audio, and video simultaneously. |
Claude | Anthropic | Known for its emphasis on safety and ethics, with a "constitution" that guides its responses to be helpful, harmless, and honest. |
Each model has its own strengths, weaknesses, and "personality." Part of the prompt engineer's job is to understand these nuances to choose the right tool for the right task and adapt their prompts accordingly.
Module 2: Fundamentals of Prompting
Now that we've laid the foundations of what prompt engineering is, it's time to dive into the heart of practice. This module is dedicated to the fundamental elements that constitute an effective prompt. Mastering these basics is an essential prerequisite for tackling more complex techniques.
Lesson 2.1: The Anatomy of an Effective Prompt
An effective prompt is not simply a question thrown at the model. It's a carefully constructed instruction that can contain several elements, each playing a specific role. While not all elements are necessary for every prompt, knowing them allows you to structure your thinking and build more robust queries.
A prompt can be broken down into four main components:
Component | Role | Example |
---|---|---|
Role (Persona) | Instruct the model on the "personality" or expertise it should adopt. | "Act as a digital marketing expert..." |
Instruction (Task) | Define the specific task the model must accomplish. | "...write a newsletter for the launch of a new product." |
Context | Provide background information, constraints, or data necessary to perform the task. | "The product is a meditation app for overworked professionals. The tone should be soothing but professional." |
Output Format | Specify the structure or format of the expected response. | "The newsletter should contain a catchy title, three paragraphs, and a call to action. All in Markdown format." |
By combining these elements, you go from a simple question like "write a newsletter" to a much richer and more directive prompt, which significantly increases the chances of getting a satisfactory result on the first try.
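Assembled, the four example fragments from the table above form a single, much more directive prompt:
"Act as a digital marketing expert. Write a newsletter for the launch of a new product. The product is a meditation app for overworked professionals. The tone should be soothing but professional. The newsletter should contain a catchy title, three paragraphs, and a call to action. All in Markdown format."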
Lesson 2.2: Writing Best Practices
Beyond structure, the way the prompt is written has a major impact. Here are some of the recognized best practices, derived from the documentation of major AI labs and community experience.
1. Be Specific and Clear: Language models don't read minds. Ambiguity is their worst enemy. You must avoid vague descriptions and provide precise details about what you expect. For example, instead of saying "summarize this text," prefer "summarize this text in three key points, focusing on the financial implications."
2. Use Delimiters: To help the model clearly distinguish the different parts of your prompt (particularly to separate instructions from context or data), it's very effective to use delimiters. Triple apostrophes ('''), triple quotation marks ("""), or XML tags (such as <text>...</text>) are common choices.
'''{insert text to summarize here}'''
Summarize the text above in three sentences.
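If you build prompts programmatically, the same pattern is easy to reproduce. A minimal sketch in Python (the variable name is illustrative):
# Keep the data clearly separated from the instruction with delimiters
article_text = "..."  # the text to summarize goes here
prompt = f"'''{article_text}'''\nSummarize the text above in three sentences."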
3. Give the Model Time to "Think": For complex tasks that require reasoning, forcing the model to give an immediate response can lead to errors. An effective technique is to ask it to detail its reasoning step by step before giving its final conclusion. This is the basis of the Chain-of-Thought technique that we'll see in more detail in the next module.
The instruction "Let's think step by step" added at the end of a prompt has proven remarkably effective at improving model performance on reasoning problems. [2]
4. Provide Examples (Few-Shot Prompting): If the task is new or complex, showing the model exactly what you expect through one or more examples (the "shots") is one of the most powerful techniques. This allows the model to understand the format, style, and level of detail expected.
Lesson 2.3: Basic Practical Exercises
The best way to learn is to practice. Here are some simple exercises to start applying the principles seen above. We encourage you to try them on the LLM of your choice.
Exercise 1: Simple Text Generation
- Task: Write a short email to invite a colleague to lunch.
- Simple prompt: "Write an email to invite Jean to lunch."
- Improved prompt: "Act as a friendly but professional colleague. Write a short email (less than 100 words) to invite my colleague, Jean, to lunch next week. Suggest that he choose the day and place. The tone should be informal. Sign with my name, Alex."
Exercise 2: Question-Answering
- Task: Get a simple explanation of photosynthesis.
- Simple prompt: "What is photosynthesis?"
- Improved prompt: "Explain the concept of photosynthesis as if you were addressing a 10-year-old child. Use a simple analogy. Your response should not exceed 150 words."
Exercise 3: Document Summary
Task: Summarize a news article.
Improved prompt:
"You are an analyst tasked with creating a synthesis for your very busy manager.
Summarize the news article below in 3 key points, in bullet point format.
Focus only on the most important information and key figures.
'''{paste article here}'''"
By training with these exercises, you'll start to develop an intuition about how to formulate your requests to get the best possible results. It's this intuition, combined with structured knowledge of techniques, that makes a good prompt engineer.
Module 3: Advanced Prompting Techniques
After mastering the fundamentals, we'll now explore more sophisticated techniques. These methods allow you to unlock the true potential of LLMs, particularly for complex tasks that require reasoning, logic, or great precision. This module will give you the tools to transform average responses into exceptional results.
Lesson 3.1: Zero-Shot and Few-Shot Prompting
These two techniques are fundamental and describe the number of examples provided to the model in the prompt.
Zero-Shot Prompting: This is the simplest form of prompting. You ask the model to perform a task without providing any prior examples. This relies entirely on the knowledge and capabilities acquired by the model during its training. The exercises in module 2 were primarily examples of zero-shot prompting.
Few-Shot Prompting: This technique involves including a few examples (the "shots") in the prompt to show the model the type of response expected. It's a form of "in-context learning" where the model learns from these examples for the duration of the query. This is extremely powerful for tasks that are new to the model or that require a very specific output format.
Few-shot prompting can be used as a technique to enable in-context learning where we provide demonstrations in the prompt to steer the model to better performance. The demonstrations serve as conditioning for subsequent examples where we would like the model to generate a response. [3]
Example of Few-Shot Prompting (Sentiment Analysis):
Decide if the sentiment of the tweet is Positive, Negative, or Neutral.
Tweet: "I'm over the moon, I got a promotion!"
Sentiment: Positive
Tweet: "The traffic this morning was absolutely horrible."
Sentiment: Negative
Tweet: "I'm watching the football game."
Sentiment: Neutral
Tweet: "Wow, this new restaurant is incredible, the food is delicious!"
Sentiment:
By providing three clear examples, the model no longer has to guess what we mean by "sentiment" and can produce the response "Positive" with much greater confidence.
Lesson 3.2: Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting is a major advancement that has considerably improved the ability of LLMs to solve problems requiring multi-step reasoning (mathematical problems, logic, common sense, etc.).
The central idea is simple: instead of directly asking for the final answer, we ask the model to break down its reasoning, to make explicit the steps that lead it to the conclusion. This decomposition forces the model to follow a logical process, which reduces careless errors and allows verification of the validity of the reasoning.
Introduced by Wei et al. (2022), chain-of-thought (CoT) prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding. [4]
Example of CoT Prompting (Mathematical Problem):
Standard Prompt (incorrect):
- Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can contains 3 tennis balls. How many tennis balls does he have now?
- A: The answer is 8. (Incorrect)
Prompt with Chain-of-Thought (correct):
- Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can contains 3 tennis balls. How many tennis balls does he have now?
- A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11. (The reasoning is shown, and the answer is correct.)
An even simpler and often very effective variant is Zero-Shot CoT, which simply consists of adding the phrase "Let's think step by step" at the end of your question. [2]
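For example:
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there? Let's think step by step.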
Lesson 3.3: Other Advanced Techniques
Prompt engineering is a constantly evolving field. Here's an overview of other powerful techniques you can explore.
Technique | Description | Typical Use Case |
---|---|---|
Self-Consistency | Generate multiple responses with a chain of thought (by increasing the model's "temperature" for more diversity), then select the most frequent or most coherent response; see the sketch after this table. | Improve response reliability for reasoning tasks. |
Generate Knowledge Prompting | Before answering the question, ask the model to generate some facts or knowledge on the subject. This "primes" the model with relevant information. | Questions on little-known subjects or requiring specific knowledge. |
Prompt Chaining | Break down a complex task into a series of simpler prompts, where the output of one prompt becomes the input of the next. | Automation of complex workflows (e.g., summarize an article, then extract key entities, then write a tweet). |
Tree of Thoughts (ToT) | The model explores multiple reasoning paths (tree branches) in parallel, evaluates their relevance, and chooses the best path. | Solving very complex problems where multiple strategies are possible. |
Retrieval-Augmented Generation (RAG) | Couple the LLM to an external database (e.g., a company document base). Before responding, the model searches for the most relevant information in this base and uses it to construct its response. | Creating specialized chatbots on proprietary knowledge, reducing "hallucinations." |
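To make one of these techniques concrete, here is a minimal sketch of Self-Consistency in Python, using the OpenAI SDK (covered in Module 4). The model name and the naive answer extraction are simplifying assumptions; production code would parse answers more robustly:
import re
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def self_consistency(question, n_samples=5):
    """Sample several chains of thought, then majority-vote on the final answer."""
    answers = []
    for _ in range(n_samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question + " Let's think step by step."}],
            temperature=0.8,  # higher temperature produces more diverse reasoning paths
        )
        text = response.choices[0].message.content
        numbers = re.findall(r"\d+", text)  # naive: take the last number as the answer
        if numbers:
            answers.append(numbers[-1])
    return Counter(answers).most_common(1)[0][0]  # most frequent answer wins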
Mastering these advanced techniques will allow you to go from the status of occasional user to that of true architect of AI interaction.
Module 4: Tools and Platforms for the Prompt Engineer
A good craftsman must know their tools. For the prompt engineer, this means mastering the interfaces, platforms, and APIs that allow interaction with language models. This module presents the ecosystem of tools you'll use daily.
Lesson 4.1: Tool Overview
The prompt engineer's toolset can be classified into several categories, each responding to specific needs.
1. Playgrounds and Chat Interfaces:
These are the most direct entry points for interacting with LLMs. They're perfect for rapid experimentation, prompt prototyping, and learning.
- Examples: OpenAI's Playground, chat interfaces like ChatGPT, Google Gemini, Anthropic Claude.
- Usage: Quickly test prompt ideas, adjust model parameters (temperature, top_p, etc.), and get instant feedback.
2. Prompt Management and Orchestration Tools:
When prompts become more complex and integrate into applications, more structured tools are needed to manage, version, and chain them.
- Examples: Microsoft Prompt Flow, LangChain, LlamaIndex.
- Usage: Create prompt chains (Prompt Chaining), integrate external data sources (RAG), and build complete applications based on LLMs.
3. Practice and Evaluation Platforms:
These platforms are designed to sharpen your skills by offering challenges and allowing you to evaluate your prompt performance.
- Examples: Emio.io, competition platforms like Kaggle (for LLM-related tasks).
- Usage: Train on concrete cases, compare your approaches to others, and build a project portfolio.
Lesson 4.2: Using APIs
To integrate the power of LLMs into an application, website, or automated workflow, it's essential to go through an API (Application Programming Interface).
An API allows two computer programs to communicate with each other. In our case, your script (for example in Python) will send a request to the LLM provider's API (like OpenAI), containing your prompt and parameters. The API will process the request, submit it to the model, and return the generated response, which your script can then use.
Example of a simple API call in Python (a minimal, runnable sketch using the official OpenAI SDK; the model name is illustrative):
import os
from openai import OpenAI

# API key configuration: the client reads the OPENAI_API_KEY environment variable by default
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Model call
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model available to you works
    messages=[
        {"role": "user", "content": "Explain gravity as if you were talking to a 5-year-old child."}
    ],
    max_tokens=50,
)

# Display the response
print(response.choices[0].message.content.strip())
Mastering APIs opens the door to automation and creating personalized tools, multiplying your efficiency as a prompt engineer.
Lesson 4.3: Practice and Continuing Education Platforms
Prompt engineering is a field that evolves at breakneck speed. Technological watch and continuing education are therefore absolutely essential. Fortunately, many high-quality resources, often free, are available.
Resource | Type | Description |
---|---|---|
Prompt Engineering Guide | Online guide | A very comprehensive textual resource, covering all basic and advanced techniques. Ideal as a reference. |
Learn Prompting | Online course | An open-source and community course, very well structured for progressive learning. |
DeepLearning.AI | Online course (MOOC) | Offers short and specialized courses, often created in partnership with AI labs themselves (e.g., "ChatGPT Prompt Engineering for Developers"). |
Online Communities | Forums, Discord, Reddit | Places to exchange questions, share discoveries, and stay up to date with the latest news (e.g., the r/PromptEngineering subreddit). |
Blogs and Research Publications | Articles, papers | For cutting-edge watch, it's useful to follow the blogs of major AI labs (OpenAI, Google AI, etc.) and consult new publications on sites like arXiv. |
By combining practical use of tools, programming via APIs, and rigorous continuing education, you'll equip yourself with all the skills necessary to excel in this field.
Module 5: Practical Situations and Case Studies
Theory and techniques are only useful if they're applied to concrete problems. This module is entirely dedicated to practice. Through three guided projects, you'll apply everything you've learned to build functional solutions and solve real challenges. Each project is designed to be completed with an LLM of your choice.
Project 1: Creating a Marketing Writing Assistant
Objective: Develop a series of prompts to assist a marketing team in creating content for the launch of a new product.
Context: You work for a startup launching "Zenith," a new AI-based task management mobile app that helps users prioritize their day based on their energy level. Your mission is to create prompts to generate different types of marketing content.
Task 1: Slogan Generation
- Challenge: Create a prompt that generates 5 short, punchy, and memorable slogans for the Zenith app.
- Prompt hints (a complete example prompt follows this list):
- Clearly define the role: "You are a genius copywriter..."
- Give context: App name (Zenith), target (busy professionals), value proposition (intelligent task prioritization).
- Specify output format: "List of 5 slogans, each on a new line."
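Putting these hints together, a possible starting prompt (to iterate on):
"You are a genius copywriter specializing in tech products. The app is called Zenith: an AI-based task management app that helps busy professionals prioritize their day based on their energy level. Write 5 short, punchy, and memorable slogans for Zenith. Output format: a list of 5 slogans, each on a new line."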
Task 2: Blog Article Writing
- Challenge: Create a prompt that generates a detailed outline for a blog article titled "5 reasons why your to-do list doesn't work (and how to fix it with AI)."
- Prompt hints:
- Use the Chain-of-Thought technique: ask the model to first list common to-do list problems, then think about how Zenith solves each problem, and finally structure everything into an article outline.
- Ask for a structured output format (e.g., H1, H2, key points under each section).
Task 3: Social Media Post Creation
- Challenge: Create a prompt that generates 3 posts for Twitter and 2 for LinkedIn, announcing the app launch.
- Prompt hints:
- Use Few-Shot Prompting: give an example of a good tweet and a good LinkedIn post to guide the style.
- Specify each platform's constraints (length for Twitter, more professional tone for LinkedIn, inclusion of relevant hashtags).
Project 2: Customer Service Task Automation
Objective: Design a prompt workflow to sort and respond to incoming customer service emails.
Context: You're responsible for automation for an e-commerce company that sells electronic equipment. You want to use an LLM to pre-process customer emails.
Task 1: Email Classification
- Challenge: Create a prompt that classifies an incoming email into one of the following categories: [Product Question], [Delivery Problem], [Refund Request], [Other].
- Prompt hints (a Python sketch follows this list):
- Provide clear examples for each category (Few-Shot).
- Ask the model to respond only with the category name, for easy integration into a script.
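As an illustration, a minimal Python sketch of this classification step (few-shot examples omitted for brevity; the model name is illustrative):
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

CATEGORIES = ["Product Question", "Delivery Problem", "Refund Request", "Other"]

def classify_email(email_body):
    """Return exactly one category name for an incoming customer email."""
    prompt = (
        "Classify the customer email below into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ". Respond with the category name only.\n\n'''" + email_body + "'''"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits classification
    )
    return response.choices[0].message.content.strip()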
Task 2: Information Extraction
- Challenge: For emails classified as [Delivery Problem], create a prompt that extracts the order number and customer name.
- Prompt hints:
- Ask for output in JSON format for easy data manipulation:
{"customer_name": "...", "order_number": "..."}
- Specify that if information is missing, the value should be null.
Task 3: Standard Response Generation
- Challenge: Create a prompt that writes a response email for a [Delivery Problem], using the extracted information.
- Prompt hints (a sketch of the full chain follows this list):
- Use variables in your prompt: "Write an email for {customer_name} regarding their order {order_number}..."
- Give the model a specific role: "You are a customer service agent, your tone should be empathetic and reassuring."
- Integrate a prompt chain: the output of the first two prompts serves as input for this one.
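A condensed sketch of the full chain, reusing client and classify_email from the Task 1 sketch (the json.loads call assumes the model returns clean JSON, which a real workflow should validate):
import json

def handle_email(email_body):
    """Chain the three prompts: classify, then extract, then draft a reply."""
    if classify_email(email_body) != "Delivery Problem":
        return None  # this workflow only handles delivery problems

    # Step 2: extract structured data as JSON
    extraction = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            'Extract the customer name and order number from the email below as JSON: '
            '{"customer_name": "...", "order_number": "..."}. Use null for missing values.\n\n'
            "'''" + email_body + "'''"}],
        temperature=0,
    )
    data = json.loads(extraction.choices[0].message.content)

    # Step 3: the extracted fields become variables in the reply prompt
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            "You are a customer service agent; your tone should be empathetic and reassuring. "
            "Write a short reply email for " + str(data["customer_name"])
            + " regarding their order " + str(data["order_number"]) + "."}],
    )
    return reply.choices[0].message.content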
Project 3: Unstructured Data Analysis
Objective: Use an LLM to analyze a set of customer reviews and extract actionable information.
Context: You're a product manager for video editing software. You've collected 500 user reviews from an online platform and want to analyze them quickly.
Task 1: Granular Sentiment Analysis
- Challenge: Create a prompt that analyzes a review and identifies sentiment (positive, negative, neutral) for specific aspects of the software: [Interface], [Performance], [Features], [Price].
- Prompt hints (an example output follows this list):
- Ask for structured output (Markdown table or JSON).
- Use Chain-of-Thought: "Analyze each sentence of the review. For each sentence, identify if it talks about the interface, performance, features, or price. Then, determine the associated sentiment."
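For instance, the structured output you request could look like this (an illustrative shape; adapt the aspects to your product):
{
  "Interface": "positive",
  "Performance": "negative",
  "Features": "positive",
  "Price": "neutral"
}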
Task 2: Problem and Suggestion Synthesis
- Challenge: Create a prompt that goes through a set of negative reviews and synthesizes the 5 most frequently mentioned problems and the 5 most common improvement suggestions.
- Prompt hints (a batching sketch follows this list):
- The task may require processing reviews in batches if the model's context window is limited.
- Ask the model to group similar themes (e.g., "slow export" and "rendering is long" should be grouped).
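A map-reduce style sketch of this batch processing, again with the OpenAI SDK (the batch size and model name are arbitrary assumptions):
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def synthesize_reviews(reviews, batch_size=50):
    """Summarize reviews batch by batch, then merge the partial syntheses."""
    partials = []
    for i in range(0, len(reviews), batch_size):
        batch = "\n---\n".join(reviews[i:i + batch_size])
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content":
                "List the problems and improvement suggestions mentioned in the reviews "
                "below, grouping similar themes together.\n\n" + batch}],
        )
        partials.append(response.choices[0].message.content)
    # Reduce step: merge the partial syntheses into a final top-5 report
    final = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            "From the partial syntheses below, identify the 5 most frequently mentioned "
            "problems and the 5 most common improvement suggestions.\n\n"
            + "\n\n".join(partials)}],
    )
    return final.choices[0].message.content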
Task 3: Summary Report Generation
- Challenge: Create a final prompt that takes the results from previous tasks to generate a one-page report for management.
- Prompt hints:
- Structure the final prompt with all collected information: general sentiments, main strengths, 5 major problems, 5 key suggestions.
- Ask for a professional report format: title, executive summary, detailed sections, and conclusion.
Module 6: Becoming a Prompt Engineer: Career and Perspectives
Congratulations, you've reached the final module of this course. You now have a solid understanding of the principles, techniques, and tools of prompt engineering. This last part is dedicated to turning these skills into a successful career and to navigating this constantly evolving field.
Lesson 6.1: Building Your Portfolio
In a field as new and practical as prompt engineering, a solid portfolio is often worth more than a degree. It's tangible proof of your skills and your ability to achieve concrete results with LLMs. The projects you completed in module 5 are an excellent starting point.
What to include in your portfolio?
- Varied projects: Show that you can apply your skills to different domains (marketing, customer service, data analysis, etc.).
- Process demonstration: Don't just show the final prompt. Explain the problem, your approach, the different prompt iterations you tested, and why your final solution is effective. It's your thought process that has value.
- Quantifiable results: If possible, measure the impact of your prompts. For example: "This prompt reduced email processing time by 30%" or "improved click-through rate by 15%".
- Personal projects: Create a tool or application that solves one of your own problems using an LLM. This demonstrates your initiative and creativity.
Where to host your portfolio?
- A simple blog or personal website is an excellent starting point.
- Platforms like GitHub are ideal for sharing code and prompts in a structured way.
- Platforms like Medium or LinkedIn are also good places to publish detailed case studies.
Lesson 6.2: Professional Opportunities
The prompt engineer profession is exploding. Initially concentrated in tech companies, it's now spreading to all sectors.
Sectors that are hiring:
- Technology: Companies developing AI-based products (SaaS, mobile apps, etc.).
- Marketing and Advertising: For content generation, campaign personalization.
- E-commerce and Customer Service: For automation, chatbots, customer feedback analysis.
- Finance and Law: For document analysis, information research, report generation.
- Healthcare: For diagnostic assistance, medical record synthesis.
Preparing for interviews:
- Be ready to demonstrate your skills live: Expect practical tests where you'll be asked to solve a problem by creating a prompt.
- Talk about your projects: Your portfolio will be your best ally. Be able to explain each project in detail.
- Show your curiosity: Discuss the latest models, latest techniques, articles you've read. Show that you're passionate and actively keeping up with developments.
Lesson 6.3: The Future of Prompt Engineering
Prompt engineering is a young field and its future is being written. Several trends are emerging:
- Automation: Techniques like Auto-CoT or Meta Prompting aim to automate part of the prompt engineer's work. The role could evolve from "prompt writer" to "prompt system supervisor."
- Specialization: We'll probably see prompt engineers specialized by domain (healthcare, finance) or by model type (image models, code models) emerge.
- Integration into other professions: Prompt engineering may not always be a standalone profession, but an essential skill for many professionals (developers, marketers, analysts, etc.).
One certainty is that the ability to communicate effectively with intelligent machines will be an increasingly valuable skill. By mastering prompt engineering, you're not just learning a new profession, you're preparing for the future of work.
The key to long-term success will be your ability to learn, adapt, and stay curious. Continue to experiment, build, and share your knowledge.
General Conclusion
We've reached the end of this complete journey on prompt engineering. Starting from basic definitions, we've explored fundamental and advanced techniques, discovered ecosystem tools, put our knowledge into practice through concrete projects, and finally, projected ourselves into the career perspectives of this fascinating field.
You now hold a map and compass to navigate the fascinating world of large language models. The journey is just beginning. The next step belongs to you: practice, experiment, fail, learn, and finally, innovate. The future of human-machine interaction is being written, and you now have all the keys in hand to be one of its architects.
Complete References
[1] Salesforce. (n.d.). What is Prompt engineering: definition, applications and limits. Retrieved September 24, 2025, from https://www.salesforce.com/fr/resources/definition/prompt-engineering/
[2] Kojima, T., et al. (2022). Large Language Models are Zero-Shot Reasoners. arXiv. https://arxiv.org/abs/2205.11916
[3] Prompt Engineering Guide. (n.d.). Few-Shot Prompting. Retrieved September 24, 2025, from https://www.promptingguide.ai/techniques/fewshot
[4] Prompt Engineering Guide. (n.d.). Chain-of-Thought Prompting. Retrieved September 24, 2025, from https://www.promptingguide.ai/techniques/cot