
Integrating Bot Framework Composer and Adaptive Dialog to create dynamic, user-configurable chatbots

In my last post, I introduced Composer and how this web-based tool by the Bot Framework Team can be used to model and create chatbot conversational logic using a drag and drop interface.

In that post, I mentioned why you might want to use Composer, covered some of the benefits, and explored some of the integrations you can use, such as LUIS and QnA Maker.

One key thing I mentioned in that post is what I believe is one of the biggest value-adds Composer brings:

if you’re a developer and find yourself with a collection of dialogues to implement, you can install Composer then let the Business Analyst model these in the designer canvas. As the Business Analyst creates the conversational logic, Composer will generate JSON under the hood.

A developer can then take the output JSON and, with a little bit of code, integrate these JSON files to generate conversations at run-time. This is an approach that I’ve been using for the past few weeks and it works well.

The developer is then freed up to work on other tasks, such as training the natural language model (LUIS), performing further integrations, or tackling more complex problems.

In this blog post, I’ll explain how you can do this integration of Composer with Adaptive Dialogues.

Specifically, we’ll:

  • Create a basic dialogue in Composer
  • Examine the underlying JSON Composer generates
  • Create a Bot Project in Visual Studio
  • Update the Bot Project to support the Adaptive Dialogue and Composer “plumbing”
  • See a Composer Generated Dialogue and conversation in Action

The code featured in this blog post is on my GitHub repo.

Tip: Another point worth mentioning is that “regular” dialogues written in code and Composer dialogues can co-exist in the same chatbot. You don’t have to opt for one or the other. This lets you create hybrid solutions and gives you further options as to how you want to develop conversational logic.

Why do this?

I’ve been using the Bot Framework since V3.  It’s changed quite a bit in that time. One thing that’s been consistent is that you need to create dialogues that model the conversational logic you want your bot to handle.

Often you need to crank these out in a language of your choice which involves building code and stitching the dialogues together. It’s core to chatbot development and if your chatbot has multiple branches of logic, it can take a little time to build these.

By leveraging Bot Framework Composer, coupled with Adaptive Dialogues, you can implement code that bridges the output of Composer Chatbots and Dialogues (JSON) with the power of the Bot Framework SDK Adaptive Dialogue technology.

Or in other words, you can write code that hydrates conversations dynamically at run-time from JSON.

This paves the way for near code-free changes if you need to update your conversational logic based on new requirements.

Benefits

There are other benefits too. The main ones I’ve seen and experienced to date are:

  • less technical users can now be involved in the chatbot development lifecycle
  • you can decouple dialogue development from other chatbot development activities
  • you can accelerate the development process
  • it opens up the possibility of dialogue maintenance being done by power users

A potential workflow for the business could be:

These are just some of the initial benefits. Let’s look at a walkthrough of the integration from start to finish.

Composer

Composer has a few components; here we’ll see how to install it and take a high-level look at the underlying JSON files that are created as you create a chatbot.

Installation

First, you need to download and install Composer from GitHub (total installation time 10-15 minutes).

Warning: You may need to install other things such as Yarn and Node.  Building and updating the packages can also take a while!

After everything has been downloaded and installed, you can build Composer by typing the following into the command prompt in the folder where Composer is installed:

yarn build

You’ll see something like this (I dig the ANSI GFX, they remind me of the days I used to run a BBS!):

After the TypeScript code has been compiled, you can type:

yarn startall

Finally, you can browse to the Composer Homepage by pointing your browser to http://localhost:3000

Creating a Chatbot with Composer

For quickness, we’ll use one of the predefined examples that Composer provides out of the box: The Echo Bot.

After this is selected, the designer canvas is displayed. You’ll see this chatbot is simple and sends back the user’s input:

We can test this out by clicking Start Bot. The Bot Framework runtime will then compile this JSON and give you the option to Test in Emulator:

After clicking this, the Bot Framework Emulator tool is loaded, and you can interact with the chatbot!

So, that’s a very quick and simple example of installing Composer and creating a basic chatbot.

Assets

Let’s look at the underlying JSON that Composer has generated to support this chatbot. You can find these in the default folder:

C:\Users\<USER>\Documents\Composer\EchoBot-0\ComposerDialogs\

Looking in this folder, we can see our EchoBot has 3 files:

  • Main.dialog
  • Main.lg
  • Main.lu

Main.dialog

This contains the main flow for our chatbot. In the JSON below, you can see the trigger that fires when we send the bot some text (the Unknown Intent). This means our bot only has this one Intent handled, as we haven’t added any natural language understanding capability.

{
  "$type": "Microsoft.AdaptiveDialog",
  "$designer": {
    "id": "433224",
    "description": "",
    "name": "EchoBot-0"
  },
  "autoEndDialog": true,
  "defaultResultProperty": "dialog.result",
  "triggers": [
    {
      "$type": "Microsoft.OnUnknownIntent",
      "$designer": {
        "id": "821845"
      },
      "actions": [
        {
          "$type": "Microsoft.SendActivity",
          "$designer": {
            "id": "003038"
          },
          "activity": "@{bfdactivity-003038()}"
        }
      ]
    },
    {
      "$type": "Microsoft.OnConversationUpdateActivity",
      "$designer": {
        "id": "376720"
      },
      "actions": [
        {
          "$type": "Microsoft.Foreach",
          "$designer": {
            "id": "518944",
            "name": "Loop: for each item"
          },
          "itemsProperty": "turn.Activity.membersAdded",
          "actions": [
            {
              "$type": "Microsoft.IfCondition",
              "$designer": {
                "id": "641773",
                "name": "Branch: if/else"
              },
              "condition": "string(dialog.foreach.value.id) != string(turn.Activity.Recipient.id)"
            }
          ]
        }
      ]
    }
  ],
  "generator": "Main.lg",
  "$schema": "https://raw.githubusercontent.com/microsoft/BotFramework-Composer/stable/Composer/packages/server/schemas/sdk.schema"
}

You’ll also see there is a SendActivity with the code @{bfdactivity-003038()}. What the hell does that mean? This brings us on to Main.lg.

Main.lg

This is the Language Generation file that contains the human-readable text the bot sends. It’s done this way to separate content from backend logic. This file only has one entry:

# bfdactivity-003038

- You said '@{turn.activity.text}'

The text @{turn.activity.text} is a placeholder; the value supplied by the user is injected at runtime by the Bot Framework. “turn” is a memory scope. There are different scopes, such as turn, dialog, and user, which help you manage variables as you model your chatbot’s logic.
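To make the scoping idea concrete, here’s a small illustrative template (not one of the generated EchoBot files) that mixes scopes: user.name is read from the user scope, while turn.activity.text comes from the current turn.

# greetUser
- Hello @{user.name}, you said '@{turn.activity.text}'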

Main.lu

As we aren’t using LUIS, this file is empty; otherwise, it would contain definitions for Intents, Entities and so on.

Those are the main components that Composer has generated for this chatbot. Next, you need a project that can load this Composer JSON at runtime into an Adaptive Dialogue that can be served in a conversation.

Chatbot Project: Composer and Adaptive Dialogue Plumbing

Finally, we get to the nuts and bolts of the integration. This is a regular chatbot project running on .NET Core. Before we load our Composer JSON / dialogues, we need to extend our chatbot project to support Adaptive Dialogues and Language Generation, and set up a Resource Explorer.

What follows are the main amendments.

Startup.cs

A few changes to this class are needed.  First, you need to create a Bot Framework Adapter.

Next, add a reference to the Resource Explorer. This lets our bot know where the JSON files can be loaded from.
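As the screenshots aren’t reproduced here, below is a rough sketch of what those Startup.cs registrations might look like. It follows the standard Bot Framework template shape; DialogBot<RootDialog> and the relative Dialogs folder path are assumptions for illustration rather than the exact code from my repo.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers().AddNewtonsoftJson();

    // The Bot Framework Adapter that will also host the Adaptive Dialogue plumbing.
    services.AddSingleton<IBotFrameworkHttpAdapter, AdapterWithErrorHandler>();

    // Storage and state (in-memory for this example).
    services.AddSingleton<IStorage, MemoryStorage>();
    services.AddSingleton<UserState>();
    services.AddSingleton<ConversationState>();

    // The Resource Explorer tells the bot where the Composer-generated
    // .dialog, .lg and .lu files can be loaded from.
    services.AddSingleton(new ResourceExplorer().AddFolder(@"Dialogs"));

    // The root dialogue and the bot that runs it.
    services.AddSingleton<RootDialog>();
    services.AddTransient<IBot, DialogBot<RootDialog>>();
}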

AdapterWithErrorHandler.cs

We need to extend this class, which handles errors, to include support for the following (a rough sketch follows the list):

  • Resource Explorer
  • Adaptive Dialogues
  • Language Generation
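As a rough guide, the extended adapter looked something like the sketch below when this post was written. The UseResourceExplorer, UseAdaptiveDialogs and UseLanguageGeneration extension methods come from the preview-era declarative/adaptive packages; as the comments at the end of this post show, those names changed in later SDK releases, so treat this as illustrative rather than definitive.

public class AdapterWithErrorHandler : BotFrameworkHttpAdapter
{
    public AdapterWithErrorHandler(IConfiguration configuration, ILogger<BotFrameworkHttpAdapter> logger, ResourceExplorer resourceExplorer, ConversationState conversationState = null)
        : base(configuration, logger)
    {
        // Wire up declarative (Composer JSON) resources, Adaptive Dialogues and
        // Language Generation so .dialog and .lg files can be resolved at runtime.
        this.UseResourceExplorer(resourceExplorer);
        this.UseAdaptiveDialogs();
        this.UseLanguageGeneration(resourceExplorer);

        OnTurnError = async (turnContext, exception) =>
        {
            // Log the error and let the user know something went wrong.
            logger.LogError(exception, "Unhandled error: {Message}", exception.Message);
            await turnContext.SendActivityAsync("The bot encountered an error or bug.");

            if (conversationState != null)
            {
                // Clear conversation state so the conversation can restart cleanly.
                await conversationState.DeleteAsync(turnContext);
            }
        };
    }
}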

That’s all the plumbing implemented. The final things we need to do are to copy the Composer-generated dialogues (JSON files) into this project and update the RootDialog.cs class to load them using our new integration code.

Composer Generated Files (.dialog, .lg and .lu)

Lift and shift the .dialog, .lg and .lu files that Composer has generated into the Dialogs folder of the chatbot project:

These are the files that were mentioned earlier (Main.dialog, Main.lg, and Main.lu).

RootDialog.cs

With the files copied over, the next step is to update RootDialog.cs. This is the initial dialogue that gets called by the chatbot when it loads for the first time.  RootDialog.cs needs to be updated to point to the Composer JSON files.  These JSON files will be used by Adaptive Dialogue to hydrate a conversation at runtime!

Here you can see the source for the Root Dialogue. Comments are added for clarity:

public class RootDialog : ComponentDialog
{
    protected readonly BotState _userState;

    public RootDialog(UserState userState) : base("root")
    {
        _userState = userState;

        // Get the folder of dialogs.
        var resourceExplorer = new ResourceExplorer().AddFolder(@"C:\Users\Jamie\source\repos\adaptive-and-composer\Dialogs\");

        // Find the main Composer dialog to start with.
        var composerDialog = resourceExplorer.GetResource("Main.dialog");

        // Hydrate an Adaptive Dialogue from the Composer JSON.
        AdaptiveDialog myComposerDialog = DeclarativeTypeLoader.Load<AdaptiveDialog>(composerDialog, resourceExplorer, DebugSupport.SourceMap);
        myComposerDialog.Id = "Main.dialog";

        // Set up language generation for the dialogue.
        myComposerDialog.Generator = new TemplateEngineLanguageGenerator(new TemplateEngine().AddFile(@"C:\Users\Jamie\source\repos\adaptive-and-composer\Dialogs\ComposerDialogs\Main\Main.lg"));

        // Add to the ComponentDialog which RootDialog inherits from.
        AddDialog(myComposerDialog);

        // Create a waterfall dialogue and begin our adaptive dialogue.
        AddDialog(new WaterfallDialog("waterfall", new WaterfallStep[] { BeginComposerAdaptiveDialog }));
    }

    private async Task<DialogTurnResult> BeginComposerAdaptiveDialog(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        await _userState.SaveChangesAsync(stepContext.Context, false, cancellationToken);

        return await stepContext.BeginDialogAsync("Main.dialog", null, cancellationToken);
    }
}

The above code contains the bare minimum that’s required to support the dynamic creation of conversational logic using Composer-generated assets.
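For completeness, the RootDialog above still needs to be run on every turn by the bot class itself. That class isn’t shown in this post, but the standard template pattern looks roughly like the sketch below (DialogBot<T> and its constructor are assumptions based on the usual Bot Framework samples):

public class DialogBot<T> : ActivityHandler where T : Dialog
{
    private readonly ConversationState _conversationState;
    private readonly Dialog _dialog;

    public DialogBot(ConversationState conversationState, T dialog)
    {
        _conversationState = conversationState;
        _dialog = dialog;
    }

    protected override async Task OnMessageActivityAsync(ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        // Run RootDialog, which in turn begins the Composer/Adaptive dialogue.
        await _dialog.RunAsync(turnContext, _conversationState.CreateProperty<DialogState>(nameof(DialogState)), cancellationToken);

        // Persist any conversation state changes made during this turn.
        await _conversationState.SaveChangesAsync(turnContext, false, cancellationToken);
    }
}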

Tip: Another nice feature is that you can also share state between Adaptive and Non-Adaptive Dialogues. Imagine a scenario where you need to share data between dialogues your Business Analysts are creating and dialogues being coded by your Development Team. Being able to share state across both lets the two teams use the same data/values as part of their conversational logic. Spoiler alert – this will be the subject of a future blog post!
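As a small taster before that post, the general idea relies on the Bot Framework’s shared memory scopes: a value written from code via a UserState property accessor should be readable by a Composer/Adaptive dialogue through the user.* memory path (like the user.name shown in the earlier .lg example), and vice versa. A minimal, illustrative waterfall step, with "name" as an example property:

private async Task<DialogTurnResult> SaveNameStep(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // "name" is an example property; use whatever key your dialogues agree on.
    var nameAccessor = _userState.CreateProperty<string>("name");
    await nameAccessor.SetAsync(stepContext.Context, "Jamie", cancellationToken);

    // A Composer/Adaptive dialogue can now read the same value via the
    // "user" memory scope, e.g. an .lg line of: - Hello @{user.name}!
    return await stepContext.NextAsync(null, cancellationToken);
}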

Testing our Example

Finally, we can test our example. To recap, our bot simply echoes back the user’s input. It was modeled in Composer like this:

By integrating support for Adaptive and Composer in the root dialogue, when we start the bot and send it some text, it sends back our input:

We can make a little tweak to the logic in Composer. For example, add an IF THEN ELSE branch of logic:

We then recopy the Composer JSON files into our bot project and run the emulator again:

The new logic has been applied! It only took a few seconds to change the logic using Composer.

Let’s keep it real though, it wouldn’t take a developer long to code that change either.

What this shows, though, is that less technical users can now build conversational logic with little to no developer involvement, as the technology and integration lower the barrier to entry for developing chatbot conversational logic.

Imagine you’re a developer and have been tasked with building the Language Understanding model, implementing QnA Maker, additional user stories and so on. The business can now get less technical users involved in the chatbot development lifecycle, thereby freeing up developers to work on other problems and helping accelerate the development process.

Summary

In this blog post, we’ve seen how you can integrate Bot Framework Composer with Bot Framework SDK Adaptive Dialogs and how this can be used to hydrate conversations from JSON files dynamically at runtime.

We’ve seen how to make changes using the designer canvas in Composer and have those take immediate effect in the chatbot with no changes to the chatbot’s initial C# code.

We’ve also outlined an end-to-end chatbot development process that shows you how to involve less technical users during the chatbot development lifecycle.

A lot of technologies have coalesced to bring this to life with the Bot Framework SDK underpinning it all. Pretty slick stuff.

Finally, massive thanks to the Microsoft Bot Framework Team for their support in helping get through the teething problems with earlier releases of Adaptive and Composer.

You can find the accompanying code for this blog post here.


3 Comments

  1. Julian Hoets

    Hello there,
    I have migrated the code below to a Bot Application using .NET Core 3.1 and the various Bot libraries at version 4.9.4.
    I cannot resolve the two mentioned references no matter what packages/libraries I include.

    DeclarativeTypeLoader
    TemplateEngineLanguageGenerator

    Have their names changed, or have they been left out of the new packages?
    If this code is redundant could you point me in the direction of a good/simple example of:
    integrating Composer / Adaptive dialogs and cards into a waterfall implementation.

    Code:
    var resourceExplorer = new ResourceExplorer().AddFolder(@"C:\Users\Jamie\source\repos\adaptive-and-composer\Dialogs\");

    // find the main composer dialog to start with
    var composerDialog = resourceExplorer.GetResource("Main.dialog");

    // hydrate an Adaptive Dialogue
    AdaptiveDialog myComposerDialog = DeclarativeTypeLoader.Load<AdaptiveDialog>(composerDialog, resourceExplorer, DebugSupport.SourceMap);
    myComposerDialog.Id = "Main.dialog";

    // setup language generation for the dialogue
    myComposerDialog.Generator = new TemplateEngineLanguageGenerator(new TemplateEngine().AddFile(@"C:\Users\Jamie\source\repos\adaptive-and-composer\Dialogs\ComposerDialogs\Main\Main.lg"));

    // add to the ComponentDialog which Root dialogue inherits from
    AddDialog(myComposerDialog);

    • keshav

      I’m having the same issue. Does anyone know what DeclarativeTypeLoader is called in the latest version?

  2. Fabricio

    If I want to copy this integration using Node.js, do you think that is possible?
