Working with YAML Prompt Templates in Semantic Kernel

In my previous post, I discussed adding prompts using config.json and skprompt.txt files, an approach that improves prompt management by separating prompt definitions from application code. Today I'll focus on another solution: prompt templates defined in YAML files. Keeping each prompt in a dedicated file significantly improves code readability, modularity, and overall project management. In this article, I'll explain how to use this approach in practice, the benefits it brings, and why it's worth adopting.

Before starting, make sure you have the required packages installed: Microsoft.SemanticKernel and Microsoft.SemanticKernel.Yaml.
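If you're starting from a fresh console project, both packages can be added with the dotnet CLI:

dotnet add package Microsoft.SemanticKernel
dotnet add package Microsoft.SemanticKernel.Yaml

Now, let's walk through an example of a console application in .NET that demonstrates how to use these tools in practice.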

Below is a sample C# code snippet that demonstrates how to load a prompt from a YAML file and use it to create a semantic function in Semantic Kernel:

using Microsoft.Extensions.Configuration;
using Microsoft.SemanticKernel;

// Read the OpenAI API key from the user secrets store.
var builder = new ConfigurationBuilder()
    .AddUserSecrets<Program>();

var configuration = builder.Build();

// Build a kernel with an OpenAI chat completion service.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o-mini", configuration["apiKey"]!)
    .Build();

// Load the prompt definition from the YAML file and turn it into a kernel function.
var path = Path.Combine(Directory.GetCurrentDirectory(), "..", "..", "..", "Prompts", "Translate.yaml");
var promptYaml = File.ReadAllText(path);
var translateFunc = kernel.CreateFunctionFromPromptYaml(promptYaml);

// Invoke the function, supplying values for the template's input variables.
var result = await kernel.InvokeAsync(translateFunc, new KernelArguments
{
    ["source_language"] = "english",
    ["target_language"] = "polish",
    ["text_to_translate"] = "Work as hard as possible; it increases the chances of success. If others work 40 hours a week, and you work 100, you'll achieve in 4 months what would take others a year."
});

Console.WriteLine(result);
Console.ReadLine();

This code first builds the application configuration with ConfigurationBuilder, using AddUserSecrets<Program>() to read the API key from the user secrets store rather than hard-coding it in the source.
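If the key isn't stored yet, it can be set from the project directory with the dotnet user-secrets tool (the "apiKey" name must match the key read in the code; the key value below is a placeholder):

dotnet user-secrets init
dotnet user-secrets set "apiKey" "<your-openai-api-key>"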

Next, a kernel is created with Kernel.CreateBuilder(). The AddOpenAIChatCompletion() call registers the gpt-4o-mini chat completion model along with the API key, and Build() produces a ready-to-use Kernel instance.

After that, we read the prompt contents from the Translate.yaml file using File.ReadAllText(). This prompt is then used to create a translation function (translateFunc) with CreateFunctionFromPromptYaml(). We then create a KernelArguments object containing the input values, such as the source language, target language, and the text to be translated.

Finally, the translation function is called via kernel.InvokeAsync(), and the translation result is displayed in the console. With this approach, we can separate prompt logic from the application code, significantly improving project modularity and management.
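Nothing ties translateFunc to a single language pair, so the same function can be reused with different arguments. A quick sketch (the sample sentence is just an illustration):

var germanResult = await kernel.InvokeAsync(translateFunc, new KernelArguments
{
    ["source_language"] = "english",
    ["target_language"] = "german",
    ["text_to_translate"] = "Better late than never."
});

Console.WriteLine(germanResult);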

YAML File

In the example above, we use a Translate.yaml file that defines the content of the prompt and the translation parameters. Here’s a sample YAML file:

name: Translate
template: |
  Translate the following text from {{$source_language}} to {{$target_language}}: "{{$text_to_translate}}"
description: Translate text.
input_variables:
  - name: source_language
    description: Language from which the text will be translated.
    is_required: true
  - name: target_language
    description: Language into which the text will be translated.
    is_required: true
  - name: text_to_translate
    description: The text that needs to be translated.
    is_required: true
output_variable:
  description: Translated sentence.
execution_settings:
  default:
    max_tokens: 1000
    temperature: 0.5

The YAML file starts with the name section, which defines the function's name. Next is the template, the actual prompt template used to generate queries for the AI. This template includes placeholders for variables like the source language, target language, and text to be translated, making it easy to tailor the prompt to specific needs.

description is a brief summary of what the function does, which is conveyed to the AI service when the function is exposed to the model. It's also particularly useful when managing multiple prompts, as it helps quickly identify each one's purpose.
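This pays off when the function is registered as part of a plugin: the model then sees the function's name and description when deciding which tool to call. A minimal sketch, using the hypothetical plugin name "Translator":

// Group the YAML-defined function into a plugin; its name and description
// become visible to the model during automatic function calling.
kernel.Plugins.Add(KernelPluginFactory.CreateFromFunctions("Translator", new[] { translateFunc }));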

In the input_variables section, we define input variables that adjust the prompt template to specific use cases. Each variable is described with three parameters: name, indicating the variable's name; description, explaining its purpose; and is_required, indicating whether the variable is mandatory. For instance, source_language specifies the language from which the text is translated, while target_language defines the target language. The final variable, text_to_translate, is the text we want to translate.
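A variable can also declare a default value, which makes supplying it at invocation time optional. A sketch of what target_language could look like with Polish as the fallback (assuming you want that argument to be optional):

  - name: target_language
    description: Language into which the text will be translated.
    default: polish
    is_required: false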

output_variable describes the output generated by the AI service, in this case, the translated text. This clarity about what constitutes the function's output is especially useful when working with more complex templates and various output data types.

Finally, we have the execution_settings section, which specifies the technical details of prompt execution. max_tokens defines the maximum length of the response that the AI model can generate, while temperature affects the response's creativity – a higher temperature value produces more varied and less predictable answers.
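Besides default, execution_settings can hold entries keyed by service ID, so the same prompt can run with different parameters on different AI services registered on the kernel. A sketch, assuming a second service registered under the hypothetical ID azure-gpt4:

execution_settings:
  default:
    max_tokens: 1000
    temperature: 0.5
  azure-gpt4:
    max_tokens: 2000
    temperature: 0.2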

Benefits of Using YAML Prompt Templates

Modularity and Readability: Describing prompts in separate files allows for a clear separation between application logic and data, improving code readability and organization. YAML enables better management of more complex prompts, bringing order where inline code might become difficult to maintain.

Easy Updates: Prompt changes can be made centrally in one place. This approach ensures that any update automatically applies wherever the prompt is used, eliminating the need for modifications across multiple code locations.

Reusability: YAML files can be reused across different parts of a project and even in other projects, reducing code duplication, minimizing error risks, and ensuring greater consistency.

Enhanced Collaboration: Storing prompts in files makes it easy for the entire team to share them in a repository, ensuring uniform standards and facilitating teamwork. YAML files are simple to edit, which supports collaboration between developers and other team members, even those with minimal programming experience.

Conclusion

Storing prompts in YAML files is an effective way to streamline and increase the modularity of projects using Semantic Kernel. If you haven’t tried this approach yet, it's worth exploring to simplify prompt management and reduce code complexity.

See you in future posts!