
Semantic Kernel: Working with Inline Prompt Functions

In the previous blog post, I introduced Semantic Kernel, the components that form the open-source SDK, and how it can be used in conjunction with OpenAI to create a simple conversational agent.

In this blog post, I take a closer look at how you can leverage prompts to create Inline Prompt Functions when using Semantic Kernel.

The following is covered:

  • Inline prompt functions
  • Ensuring predictability
  • Refining LLM prompt output
  • Few-shot prompting
  • Handling unexpected user input


Templatising prompts and loading prompt functions from the file system are not included.  These capabilities will be covered in a future blog post.

~

What Is a Prompt?

A “prompt” is the initial input given to a language model to produce a specific response.

This input can be a simple query or a complex directive that guides the AI’s output.  Crafting effective prompts is essential for optimizing AI performance and ensuring accurate, contextually relevant responses.

A common task when building an AI solution is to infer the intent of the human interacting with it.

Historically, this might have been implemented using a technology such as LUIS (Language Understanding Intelligent Service) or CLU (Conversational Language Understanding).

Implementing these often required manually training a custom language model.

You can now combine Semantic Kernel with LLMs/SLMs and prompting to perform intent recognition automatically, without the need to train a custom language model. The results are good.

When all is said and done, I am not convinced that prompt engineering alone is a career path in its own right.  It is another tool in the developer’s toolbox.

~

Inline Prompt Function: A Simple Example

There are several ways you can implement an inline prompt in Semantic Kernel.  The most basic is to construct a string and send it to the AI model.

So, given the common task of inferring the human’s intent, we can write the following:

static async Task Main(string[] args)
{
    Console.WriteLine("Launching Semantic Kernel Sandpit");

    // Build the kernel with the OpenAI chat completion connector
    var builder = Kernel.CreateBuilder()
        .AddOpenAIChatCompletion(modelId, apiKey);

    Kernel kernel = builder.Build();

    // Get the chat completion service (not used in this simple example)
    var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();

    // Start the conversation
    Console.Write("What is your request > ");

    string request = Console.ReadLine()!;

    // Prompt to send to the AI model
    string prompt = $"What is the intent of this request? {request}";

    // Print the results
    Console.WriteLine(await kernel.InvokePromptAsync(prompt));
}


This returns the following:

In the past, using LUIS, I would have had to create a new Intent called BookFlight.

Next, I would have had to create multiple utterances to help train the language model to better understand this type of natural language.

Connecting Semantic Kernel to an LLM means you no longer need to do this.

As great as this is, the simple example isn’t very practical, and it would still be hard to codify business logic around the LLM’s free-text response.  A better way is required.
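As an aside, InvokePromptAsync is itself a convenience: behind the scenes, Semantic Kernel wraps the string in a transient inline prompt function and invokes it.  You can make that function explicit yourself.  Here is a minimal sketch, reusing the kernel and prompt variables from the example above:

// Create the inline prompt function explicitly, then invoke it
// (equivalent in effect to kernel.InvokePromptAsync(prompt))
KernelFunction intentFunction = kernel.CreateFunctionFromPrompt(prompt);

FunctionResult result = await kernel.InvokeAsync(intentFunction);
Console.WriteLine(result);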

~

Ensuring Predictability

Generative AI is powerful but also unpredictable.  You might need a way to ensure the language model only handles a discrete range of actions.

For example, a typical set of choices for a customer service AI solution may involve:

  • get next available booking date
  • make a booking
  • report a fault


To implement these options in a prompt and get a more predictable reply from the LLM, you can write something like the following:

Console.Write("What is your request > ");

string request = Console.ReadLine()!;

string prompt = @$"What is the intent of this request? {request}
You can choose between GetNextBookingDate, MakeBooking, ReportAFault.";

// Print the results
Console.WriteLine(await kernel.InvokePromptAsync(prompt));


This returns the following:

Interestingly, the gpt-3.5-turbo model failed to identify the correct intent when I tested this. I had to use gpt-4.
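If you find yourself in the same position, swapping models is a one-line change to the connector registration.  You can also pass execution settings to nudge the model towards more repeatable answers.  A minimal sketch, assuming the OpenAI connector and its OpenAIPromptExecutionSettings (temperature isn’t discussed above; it is just a common lever), reusing the apiKey and prompt from earlier:

// Register a more capable model (a one-line change)
var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4", apiKey);

Kernel kernel = builder.Build();

// Temperature 0 asks the model for its most likely, least creative answer
var settings = new OpenAIPromptExecutionSettings { Temperature = 0 };

Console.WriteLine(await kernel.InvokePromptAsync(prompt, new KernelArguments(settings)));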

~

Refining LLM Prompt Output

Directing the LLM to work with a discrete set of choices gives you a more predictable development experience, but it can still be difficult to parse the exact response or codify a solution around it.

There is another way.

You can further refine the output by adding some structure to the prompt with additional formatting.

For example:

Console.Write("What is your request > ");

string request = Console.ReadLine()!;

string prompt = @$"Instructions: What is the intent of this request?
             Choices: GetNextBookingDate, MakeBooking, FileAComplaint.
             User Input: {request}
             Intent: ";

// Print the results
Console.WriteLine(await kernel.InvokePromptAsync(prompt));

This returns the following:

Here we can see the relevant choice option (MakeBooking) has been inferred by the AI model.

This is highly useful.

A concrete description of the selected human intent makes it much easier to codify a solution and take the relevant action.

This might involve making an API call or asking the user a follow-up question to collect further information.
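For example, here is a minimal routing sketch that switches on the returned label.  It assumes the model replies with exactly one of the choices; the handlers are hypothetical placeholders:

// Read the model's reply and route to the relevant action
FunctionResult result = await kernel.InvokePromptAsync(prompt);
string intent = result.GetValue<string>()?.Trim() ?? "";

switch (intent)
{
    case "GetNextBookingDate":
        // e.g. call a booking API to fetch the next available slot
        Console.WriteLine("Fetching the next available booking date...");
        break;
    case "MakeBooking":
        // e.g. ask a follow-up question to collect booking details
        Console.WriteLine("What date would you like to book?");
        break;
    case "FileAComplaint":
        // e.g. kick off a complaints workflow
        Console.WriteLine("Please describe the issue.");
        break;
    default:
        Console.WriteLine("Sorry, I didn't understand that request.");
        break;
}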

~

Give Your AI a Hand by Implementing Few-Shot Prompting

The earlier examples have been implemented using what is known as zero-shot prompting.

This is fine in simple scenarios where the question/responses are straightforward.

You might need more nuance when users supply prompts that are similar in style or wording but must be treated differently.

For example, consider the following user prompt: “can you tell me how to make a booking for the next available date?”

It’s possible the LLM may select either GetNextBookingDate or MakeBooking for the above example.

You can give the AI model a slight steer by providing an example phrase for each intent.

You can see an example of this here:

Console.Write("What is your request > ");

string request = Console.ReadLine()!;

string prompt = @$"Instructions: What is the intent of this request?
          Choices: GetNextBookingDate, MakeBooking, FileAComplaint.
          User Input: Can you tell me how to find the next booking date?
          Intent: GetNextBookingDate
          User Input: Can you tell me how to make a booking?
          Intent: MakeBooking
          User Input: {request}
          Intent: ";

Console.WriteLine(await kernel.InvokePromptAsync(prompt));


Running this code and supplying the prompt “can you tell me how to make a booking for the next available date?” returns the intent MakeBooking:

Using few-shot prompting improves the accuracy of your LLM predictions.

~

Handling the Unexpected

There are times when your LLM will not behave as expected due to insufficient training data, examples, or system prompts.  You can handle these situations by telling the LLM how to behave and what corrective action to take.

This can be accomplished by adding instructions.

You can see an example of this here:

Console.Write("What is your request > ");

string request = Console.ReadLine()!;

string prompt = @$"
        Instructions: What is the intent of this request?
        If you don't know the intent, don't guess; instead respond with ""Unknown"".
        Choices: GetNextBookingDate, MakeBooking, FileAComplaint.

        User Input: Can you tell me how to find the next booking date?
        Intent: GetNextBookingDate

        User Input: Can you tell me how to make a booking?
        Intent: MakeBooking

        User Input: Can you tell me how to file a complaint?
        Intent: FileAComplaint

        User Input: {request}
        Intent: ";

Console.WriteLine(await kernel.InvokePromptAsync(prompt));


Supplying the prompt “tell me the time” returns the following:

Here, we can see the Unknown intent is returned (as per our instructions).  You can then take corrective action or run additional logic.
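For instance, a simple corrective loop might re-prompt the user until a known intent is returned.  A sketch, where BuildIntentPrompt is a hypothetical helper that assembles the few-shot prompt shown above:

// Keep asking until the model returns a known intent
string intent;
do
{
    Console.Write("What is your request > ");
    string request = Console.ReadLine()!;

    // BuildIntentPrompt is a hypothetical helper wrapping the prompt above
    string prompt = BuildIntentPrompt(request);
    intent = (await kernel.InvokePromptAsync(prompt)).GetValue<string>()?.Trim() ?? "Unknown";

    if (intent == "Unknown")
    {
        Console.WriteLine("Sorry, I didn't understand that. Please rephrase your request.");
    }
} while (intent == "Unknown");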

~

Video Demo

The following 10-minute YouTube video shows each of the Inline Prompt Functions in action:

Summary

In this blog post, we’ve explored some of the options you have available to you when implementing prompts using Semantic Kernel.

In the next blog post, we will take a closer look at creating, configuring, and loading a prompt function from the file system.

These are known as File-based Prompt Functions.

We’ll see how File-based Prompt Functions can be used to further extend the kernel behaviour.

~

Further Reading and Resources

You can learn more about Semantic Kernel and Prompt Engineering here:


Enjoyed what you’ve read, have questions about this content, or would like to see another topic covered? Drop me a note below.

You can also schedule a call using my Calendly link to discuss consulting and development services.
