In my last post, I showed how you can use Bot Framework Composer, coupled with Adaptive Dialog, to create dynamic, user-configurable conversational logic – and all using a drag and drop interface with minimal code.
In that post, I showed how the JSON that Composer generates can be used to hydrate conversational logic at run-time using Adaptive Dialogs. I also ran through an example that showed this in action and discussed some of the benefits I found whilst building my most recent chatbot.
I shared the following tip:
“Another point worth mentioning is that “regular” dialogues written in code and Composer dialogues can co-exist in the same chatbot. You don’t have to opt for one or the other. This lets you create hybrid solutions and gives you further options as to how you want to develop conversational logic.”
In this blog post, we’ll see how you can do this, i.e. mix and match “regular” Bot Framework Dialogs with Composer-Generated Adaptive Dialogs.
Specifically, we’ll create a Bot Project in Visual Studio that contains:
- a regular dialogue that asks the user for their name
- a dialogue created with Composer that follows on from this and uses input from the “regular” dialogue.
We’ll see a demo of this in action with all code being available for you on GitHub to check out.
Note – this uses DLLs from the Bot Framework that are currently in “Preview Mode”.
Why do this? What are the Benefits?
There are a few reasons why you might want to do this. The main ones that immediately come to mind are augmenting your chatbot’s functionality and configuration management.
Bot Functionality
You’re already invested in chatbot development and have created multiple dialogues. It’s unlikely you’ll want to rewrite them, but you may want to offload development to less technical users whilst increasing your existing chatbot’s functionality at the same time.
With a hybrid approach, you can add extension points into existing “regular” dialogues that break out into an Adaptive Dialogue created with Composer.
This would let you augment your chatbot’s functionality with little to no code changes.
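To make this concrete, here is a minimal sketch of what such an extension point might look like as a step inside a regular waterfall dialog. The step name and the “UserCustomisation.dialog” ID are hypothetical; the Adaptive Dialogue itself would be hydrated from Composer assets in the same way we’ll see later in this post:

private async Task<DialogTurnResult> ExtensionPointAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // If a Composer-generated dialogue has been registered under this (hypothetical) ID,
    // break out into it. FindDialog comes from ComponentDialog.
    if (FindDialog("UserCustomisation.dialog") != null)
    {
        return await stepContext.BeginDialogAsync("UserCustomisation.dialog", null, cancellationToken);
    }

    // Otherwise, skip straight to the next waterfall step.
    return await stepContext.NextAsync(null, cancellationToken);
}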
Configuration Management
If you have a greenfield project, you can architect your chatbot to contain extension points at standardised areas in each dialogue.
For example, your development team can work on the nuts and bolts of your chatbot’s “core” dialogues that users can’t update. These dialogues can form your white-label bot.
Each extension point can be added at standard places in the white-label bot’s dialogues, such as the Begin / End sections of each core dialogue. These extension points could break out into their own Adaptive Dialogues. When an Adaptive Dialogue completes, the conversation could resume and carry on as normal.
Whether or not each Adaptive Dialog does something is up to you, but by baking this in from the outset, you introduce an element of configuration management that can be passed on to end users (with the help of Composer).
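Continuing the hypothetical sketch above, the extension points could be wired into standard positions in each core dialogue’s waterfall, for example:

// Sketch only: BeginExtensionPointAsync / EndExtensionPointAsync are variants of the
// hypothetical ExtensionPointAsync step shown earlier, each targeting its own
// Composer-generated dialogue.
var waterfallSteps = new WaterfallStep[]
{
    BeginExtensionPointAsync,  // breaks out to a Composer dialogue, if one is configured
    RunCoreLogicAsync,         // developer-owned "core" logic end users can't change
    EndExtensionPointAsync     // breaks out to a second Composer dialogue, if configured
};
AddDialog(new WaterfallDialog(nameof(WaterfallDialog), waterfallSteps));
InitialDialogId = nameof(WaterfallDialog);

When each Adaptive Dialogue completes, the waterfall simply resumes at the next step, which is what lets the conversation carry on as normal.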
These are just some of the initial benefits. Let’s look at the conversational logic we’ll build that will show how a “regular” dialogue can interact with a Composer / Adaptive Dialog.
Overview
The chatbot will consist of one main (or root) dialogue called RootDialog.cs. Two dialogues will be invoked from this root level dialogue:
- UserProfileDialog
- Main.dialog
UserProfileDialog.cs
This is a “regular” dialogue and will simply ask for the user’s name. It will then greet the user:
This will be the first dialog that gets invoked when RootDialog is initiated.
Main.dialog
This is the Adaptive Dialog written in Composer. This will be the second dialog to run that picks up where UserProfileDialog left off. For convenience, we’re using the Composer / Adaptive Dialog from the last blog post. To recap, here is the flow of this dialog:
Here, you can see the chatbot will take the text from user input in the previous “regular” dialogue (turn.activity.text) and echo it back to the human. If the user supplies “boo” for their name, the bot will say “you scared me”, else, another message is sent back to the user. Simple stuff, but you get the idea!
UserProfileDialog.cs (Regular Dialog)
Here we can see the listing for this dialog:
public class UserProfileDialog : ComponentDialog
{
    private readonly IStatePropertyAccessor<UserProfile> _userStateAccessor;

    public UserProfileDialog(UserState userState) : base(nameof(UserProfileDialog))
    {
        _userStateAccessor = userState.CreateProperty<UserProfile>("UserProfile");

        // This array defines how the Waterfall will execute.
        var waterfallSteps = new WaterfallStep[]
        {
            GetName,
            Greet
        };

        // Add named dialogs to the DialogSet.
        // These names are saved in the dialog state.
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), waterfallSteps));
        AddDialog(new TextPrompt(nameof(TextPrompt)));

        // The initial child Dialog to run.
        InitialDialogId = nameof(WaterfallDialog);
    }

    private async Task<DialogTurnResult> GetName(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // Prompt the user for their name with a TextPrompt.
        var promptOptions = new PromptOptions { Prompt = MessageFactory.Text("Please enter your name.") };
        return await stepContext.PromptAsync(nameof(TextPrompt), promptOptions, cancellationToken);
    }

    private async Task<DialogTurnResult> Greet(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // Greet the user with the name captured by the previous step.
        await stepContext.Context.SendActivityAsync("Pleased to meet you " + stepContext.Result.ToString());
        return await stepContext.ContinueDialogAsync(cancellationToken);
    }
}
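One thing the listing doesn’t show is the UserProfile type referenced by the state accessor. A minimal version (the original may well carry more properties) would be:

// A minimal sketch of the UserProfile state class referenced above.
public class UserProfile
{
    public string Name { get; set; }
}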
Main.dialog (Composer / Adaptive Dialog)
We already covered the flow for this dialogue in the previous blog post. For completeness, however, here is the underlying JSON that is used to hydrate an Adaptive Dialogue at run-time:
{ "$type": "Microsoft.AdaptiveDialog", "$designer": { "id": "433224", "name": "EchoBot-0" }, "autoEndDialog": true, "defaultResultProperty": "dialog.result", "triggers": [ { "$type": "Microsoft.OnUnknownIntent", "$designer": { "id": "821845" }, "actions": [ { "$type": "Microsoft.SendActivity", "$designer": { "id": "003038" }, "activity": "@{bfdactivity-003038()}" }, { "$type": "Microsoft.IfCondition", "$designer": { "id": "069505", "name": "Branch: if/else" }, "condition": "turn.activity.text == 'boo'", "actions": [ { "$type": "Microsoft.SendActivity", "$designer": { "id": "889027", "name": "Send a response" }, "activity": "@{bfdactivity-889027()}" } ], "elseActions": [ { "$type": "Microsoft.SendActivity", "$designer": { "id": "414547", "name": "Send a response" }, "activity": "@{bfdactivity-414547()}" } ] } ] }, { "$type": "Microsoft.OnConversationUpdateActivity", "$designer": { "id": "376720" }, "actions": [ { "$type": "Microsoft.Foreach", "$designer": { "id": "518944", "name": "Loop: for each item" }, "itemsProperty": "turn.Activity.membersAdded", "actions": [ { "$type": "Microsoft.IfCondition", "$designer": { "id": "641773", "name": "Branch: if/else" }, "condition": "string(dialog.foreach.value.id) != string(turn.Activity.Recipient.id)" } ] } ] } ], "generator": "Main.lg", "$schema": "https://raw.githubusercontent.com/microsoft/BotFramework-Composer/stable/Composer/packages/server/schemas/sdk.schema" }
RootDialog.cs
Finally, we get to the nuts and bolts of this integration between a regular dialog and an adaptive dialog created using Composer! Here you can see the entire code listing for RootDialog:
public class RootDialog : ComponentDialog
{
    protected readonly Microsoft.Bot.Builder.BotState _userState;

    public RootDialog(UserState userState) : base("root")
    {
        _userState = userState;

        AddDialog(new UserProfileDialog(userState));

        // The initial child Dialog to run.
        InitialDialogId = "waterfall";

        // Get the folder of declarative dialogs.
        var resourceExplorer = new ResourceExplorer().AddFolder("Dialogs");

        // Find the main Composer dialog to start with.
        var composerDialog = resourceExplorer.GetResource("Main.dialog");

        // Hydrate an Adaptive Dialogue.
        AdaptiveDialog myComposerDialog = DeclarativeTypeLoader.Load<AdaptiveDialog>(composerDialog, resourceExplorer, DebugSupport.SourceMap);
        myComposerDialog.Id = "Main.dialog";

        // Set up language generation for the dialogue.
        myComposerDialog.Generator = new TemplateEngineLanguageGenerator(new TemplateEngine().AddFile(@"Dialogs\ComposerDialogs\Main\Main.lg"));

        // Add to the ComponentDialog that RootDialog inherits from.
        AddDialog(myComposerDialog);

        AddDialog(new WaterfallDialog("waterfall", new WaterfallStep[]
        {
            StartDialogAsync,
            BeginComposerAdaptiveDialog
        }));
    }

    private async Task<DialogTurnResult> StartDialogAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // Start the child dialog. This will prompt for the user's name and complete when that dialog ends.
        return await stepContext.BeginDialogAsync("UserProfileDialog", null, cancellationToken);
    }

    private async Task<DialogTurnResult> BeginComposerAdaptiveDialog(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        await _userState.SaveChangesAsync(stepContext.Context, false, cancellationToken);
        return await stepContext.BeginDialogAsync("Main.dialog", null, cancellationToken);
    }
}
The above code contains the bare minimum that’s required to support the integration. There are a few key steps to pick out:
Step 1: Add the UserProfileDialog to the dialogs we want to make available in RootDialog.
Step 2: Hydrate an Adaptive Dialog from the Composer-generated assets (Main.dialog / Main.lg). We also add this to the dialogs that we want to make available in the RootDialog.
Step 3: Create a Waterfall Dialog to house both components, calling two steps:
- StartDialogAsync
- BeginComposerAdaptiveDialog
Let’s take a closer look at StartDialogAsync and BeginComposerAdaptiveDialog. (Spoiler alert – there’s not much code!)
StartDialogAsync
In this step, we kick off the Regular Dialog UserProfileDialog:
private async Task<DialogTurnResult> StartDialogAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    return await stepContext.BeginDialogAsync("UserProfileDialog", null, cancellationToken);
}
BeginComposerAdaptiveDialog
Again, nothing too much going on; we simply begin our Adaptive Dialogue Main.dialog.
private async Task<DialogTurnResult> BeginComposerAdaptiveDialog(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    await _userState.SaveChangesAsync(stepContext.Context, false, cancellationToken);
    return await stepContext.BeginDialogAsync("Main.dialog", null, cancellationToken);
}
Testing our Example
With everything in place, we can give our chatbot a test. We’ll use the Bot Framework Emulator to do this.
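One thing to note if you’re following along: the plumbing that hands RootDialog to the adapter isn’t shown in this post. Assuming the standard pattern from the official Bot Framework samples (AdapterWithErrorHandler and the dialog-running DialogBot<T> both come from those samples), the Startup.cs registrations would look something like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().AddNewtonsoftJson();

    // The adapter with error handling, as per the official samples.
    services.AddSingleton<IBotFrameworkHttpAdapter, AdapterWithErrorHandler>();

    // Storage and state. MemoryStorage is fine for local testing with the Emulator.
    services.AddSingleton<IStorage, MemoryStorage>();
    services.AddSingleton<UserState>();
    services.AddSingleton<ConversationState>();

    // The root dialog and the bot that runs it.
    services.AddSingleton<RootDialog>();
    services.AddTransient<IBot, DialogBot<RootDialog>>();
}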
To recap: if the user enters a name, the input will be echoed back to them and the bot will greet them. If they enter “boo”, the bot will send a message advising it got a fright. For reference, here is the flow again (for the Composer / Adaptive logic):
Here, we can see this in action:
Next, we can supply “boo” and the other branch of logic is invoked:
This isn’t the most complex of chatbots, but what we’ve seen is how you can blend regular Bot Framework Dialogs with Adaptive Dialogs.
You can use both technologies to help you build extra configuration into your existing chatbots. It may even make you rethink your approach to chatbot architecture.
For example, you might want to offer your chatbot’s core offerings as part of one Bot Framework Skill which can’t be changed by a user.
You might then offer user-configurable conversational logic in another Skill. This user-configurable Skill may have some core features that shouldn’t be changed by the user. The solution we’ve just gone through can help you do this.
Summary
In this blog post, we’ve seen how you can blend Bot Framework Dialogs with Composer Generated Adaptive Dialogs.
We explored some potential uses for blending both sets of technology and the benefits it can bring to your chatbot development.
Finally, we ran through a demo of this in action. In this demo, we saw how the regular dialog started a conversation, then how the Composer generated Adaptive Dialog was able to follow on from the regular dialog.
I hope you’ve found this useful!
You can find the accompanying code for this sample here.
Kushal
Hi Jamie, excellent post again.
In fully composer made bots, the interruptions and cross training and publishing of LUIS apps is handled by the composer, (I understand it also creates additional interruption intents under the hood).
In case of composer + regular dialogues, how can we address interruptions and LUIS modelling?
jamie_maguire
Hi Kushal,
I’m glad you found this post useful.
I’m afraid I’m unclear what you’re asking.
Do you want to handle events across dialogs? If so, you can emit custom events with simple and complex data types. These can then be trapped by parent dialogs or even bubbled up the stack.
Hope this helps.
Eric
Hi Jamie,
Over a year later, this seems to be the only reference around showing how to combine composer and “regular” dialogs. I find this surprising, since it seems like something that a lot of people would want to do.
I tried upgrading the NuGet packages to the latest version for the bot builder libraries used (4.12.2), and at least 2 references don’t resolve any more. Do you know if Microsoft has dropped support for combining dialogs, or if there’s a different way to make this work with the current libraries?
Thanks,
Eric
jamie_maguire
Hi Eric,
Thanks for checking out my blog.
It’s definitely a popular pattern and support is still available for combining dialogs. The namespacing has changed a little since this post, however.
I would need to write a new blog post to reflect the namespacing changes that Microsoft applied.
Cheers.
Andreas
Hi Jamie
Is there any update on your post? Microsoft has changed the namespaces and packages, and I cannot find the class “DeclarativeTypeLoader” in any package. Has Microsoft dropped support for combining dialogs?
Regards
Andreas
Rustam
Hello Jamie
I’ve met the same issue as Andreas.
Could you please help find this class DeclarativeTypeLoader?
Piotr
Hi Jamie,
You are incorporating the Composer dialog into your C# project. Do you know if it’s possible to do it the opposite way? I would like to open a Composer project and add a C# waterfall dialog there, but I don’t really know where to start. Any tips?