Software Architect / Microsoft MVP (AI) and Technical Author


Audio Notes: Using Azure AI Language to Perform Document Summarization

In an earlier blog post, I introduced Audio Notes.


This is a new SaaS experiment that uses artificial intelligence, speech to text, and text analytics to automatically summarise audio and create concise notes from your recordings.


In the first part of this series, I showed how Azure AI Speech is used to perform real-time, continuous transcription of speech to text within the web browser.


In this blog post, you will see how transcribed text is summarized using Azure AI Language.


What is Azure AI Language?

Azure AI Language is a service that encapsulates multiple Natural Language Processing (NLP) capabilities that help machines understand and analyse text.

Previously, these were known as Azure Cognitive Services Text Analytics, QnA Maker, and LUIS.


What Is Document and Conversation Summarisation?

I’m interested in the document summarisation feature.  An option to summarise conversations is also available.


Document summarisation uses NLP to generate a summary for documents.  There are two options for this:

  • extractive
  • abstractive



Extractive summarisation extracts sentences that collectively represent the most important or relevant information within the original content.  This option will be used for Audio Notes. It will let us surface the pertinent information in transcribed speech.



Abstractive summarisation generates a summary with concise, coherent sentences or words, rather than simply extracting sentences from the original document. Use this when you need to shorten content that could be considered too long to read.
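To make the difference concrete, here is a minimal sketch of how you might configure either style through the .NET SDK. This assumes the Azure.AI.TextAnalytics package at version 5.3.0 or later, where an AbstractiveSummarizeAction is available alongside ExtractiveSummarizeAction; the helper method itself is illustrative, not part of Audio Notes:

```csharp
using System.Collections.Generic;
using Azure.AI.TextAnalytics;

// Hypothetical helper: build the actions object for either summarisation style.
// Assumes Azure.AI.TextAnalytics 5.3.0+, where AbstractiveSummarizeAction
// exists alongside ExtractiveSummarizeAction.
static TextAnalyticsActions BuildSummaryActions(bool abstractive)
{
    if (abstractive)
    {
        // Generates new sentences that paraphrase the source content.
        return new TextAnalyticsActions
        {
            AbstractiveSummarizeActions = new List<AbstractiveSummarizeAction> { new AbstractiveSummarizeAction() }
        };
    }

    // Extracts the most relevant sentences verbatim -- the option used by Audio Notes.
    return new TextAnalyticsActions
    {
        ExtractiveSummarizeActions = new List<ExtractiveSummarizeAction> { new ExtractiveSummarizeAction() }
    };
}
```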


You can find a full list of the capabilities that Azure AI Language offers here.



Consuming Azure AI Language Services

Like other Azure AI Services, there are a few ways to consume Azure AI Language.  For document summarisation, you can use Language Studio, the REST API, or the .NET client library SDK.


To integrate document summarisation with Audio Notes, we’ll be using the client library with C#.


You can find the client library here.



Creating the Service

You create the service directly in the Azure Portal. You can see this here:


After a few moments, the service is created. Take a note of the API endpoint:


And the API key:
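With the endpoint and API key to hand, the client can be constructed. A minimal sketch, assuming the Azure.AI.TextAnalytics NuGet package and placeholder credentials (substitute your own values from the Azure Portal):

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

// Placeholder values -- replace with the endpoint and key from the Azure Portal.
var endpoint = new Uri("https://<your-resource-name>.cognitiveservices.azure.com/");
var credential = new AzureKeyCredential("<your-api-key>");

// The client is thread-safe, so it can be registered as a singleton
// (e.g. via dependency injection) and reused across requests.
var client = new TextAnalyticsClient(endpoint, credential);
```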


Example Code

Document summarisation is applied using the following code:

    public async Task<ExtractiveSummarizeResult?> SummarizeSpeechTranscript(string document)
    {
        var batchInput = new List<string>
        {
            document
        };

        TextAnalyticsActions actions = new TextAnalyticsActions()
        {
            ExtractiveSummarizeActions = new List<ExtractiveSummarizeAction>() { new ExtractiveSummarizeAction() }
        };

        // Start the analysis process.
        AnalyzeActionsOperation operation = await _client.StartAnalyzeActionsAsync(batchInput, actions);
        await operation.WaitForCompletionAsync();

        // View status.
        Console.WriteLine($"AnalyzeActions operation has completed");

        // View results.
        await foreach (AnalyzeActionsResult documentsInPage in operation.Value)
        {
            IReadOnlyCollection<ExtractiveSummarizeActionResult> summaryResults = documentsInPage.ExtractiveSummarizeResults;

            foreach (ExtractiveSummarizeActionResult summaryActionResults in summaryResults)
            {
                if (summaryActionResults.HasError)
                {
                    Console.WriteLine($"  Error!");
                    Console.WriteLine($"  Action error code: {summaryActionResults.Error.ErrorCode}.");
                    Console.WriteLine($"  Message: {summaryActionResults.Error.Message}");
                    continue;
                }

                foreach (ExtractiveSummarizeResult documentResults in summaryActionResults.DocumentsResults)
                {
                    if (documentResults.HasError)
                    {
                        Console.WriteLine($"  Error!");
                        Console.WriteLine($"  Document error code: {documentResults.Error.ErrorCode}.");
                        Console.WriteLine($"  Message: {documentResults.Error.Message}");
                        continue;
                    }

                    Console.WriteLine($"  Extracted the following {documentResults.Sentences.Count} sentence(s):");

                    foreach (ExtractiveSummarySentence sentence in documentResults.Sentences)
                    {
                        Console.WriteLine($"  Sentence: {sentence.Text}");
                    }

                    return documentResults;
                }
            }
        }

        return null;
    }



The main object we’re interested in is ExtractiveSummarizeResult.

This contains a list of summarised sentences from the parameter (document) that was passed in.

Within the context of Audio Notes, this parameter will contain the transcribed text that was captured using Azure AI Speech / speech to text.

This code will be encapsulated within a service class and made available from a web API / controller method.
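As a sketch of that encapsulation, a controller might look something like this. The class, route, and service names here are illustrative placeholders, not the actual Audio Notes code; SpeechSummaryService is assumed to be the class wrapping SummarizeSpeechTranscript above, registered with dependency injection:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical Web API controller exposing summarisation to the front end.
[ApiController]
[Route("api/[controller]")]
public class SummaryController : ControllerBase
{
    private readonly SpeechSummaryService _summaryService; // wraps SummarizeSpeechTranscript

    public SummaryController(SpeechSummaryService summaryService)
    {
        _summaryService = summaryService;
    }

    [HttpPost]
    public async Task<IActionResult> Post([FromBody] string transcript)
    {
        var result = await _summaryService.SummarizeSpeechTranscript(transcript);

        if (result is null)
        {
            return NoContent();
        }

        // Return just the summary sentences to keep the payload small.
        return Ok(result.Sentences.Select(s => s.Text));
    }
}
```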


Note: The above code was adapted from a Microsoft Learn resource.  Find this here.  Some of the object names in the MS Learn resource didn’t match the objects available in the SDK, so I’ve contacted the product team to let them know.  By the time this blog is published, the SDK or the MS Learn resource may have been updated.



In this demo, I use the following text from my Microsoft MVP profile as the document:

Jamie Maguire is a Software Architect, Developer, Microsoft MVP (AI), Technical Author, and SaaS Founder. He is a lifelong tech enthusiast with over 20 years professional experience.


Jamie is passionate about using AI technologies to help advance systems in a wide range of organisations.

He has collaborated on many projects including working with Twitter, National Geographic, and other academic institutions. Jamie is a keen contributor to the technology community and has gained global recognition for articles he has written and software he has built.

He is a STEM Ambassador and Code Club volunteer, inspiring interest at grassroots level. Jamie shares his story and expertise at speaking events, on social media and through podcast interviews.

He has co-authored a book with 16 fellow MVPs demonstrating how Microsoft AI can be used in the real world and regularly publishes material to encourage and promote the use of AI and .NET technologies.

He designed, built, and released the social media SaaS and online journal / mood tracking tool


The Visual Studio debugger is used to inspect the summarised content, and the three main items from the profile are identified:

You can see document summarisation in action here:


You can also check it out over on YouTube here (with other videos).



In this blog post, you’ve learned about Azure AI Language and seen how it can be used to perform document summarisation.


This will be used to help build a prototype for Audio Notes, a .NET SaaS web application.


In a future blog post, you’ll see how we bring together Azure AI Speech and Azure AI Language for Audio Notes.

You’ll see how these complete an end-to-end process that includes:

  • capturing speech in real-time
  • performing speech transcription
  • summarising this to generate concise notes
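The steps above could be wired together along these lines. The method names are illustrative placeholders for the Azure AI Speech and Azure AI Language calls covered in this series, not the actual Audio Notes implementation:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Azure.AI.TextAnalytics;

public partial class AudioNotesPipeline
{
    // Hypothetical end-to-end flow: audio in, concise notes out.
    public async Task<IReadOnlyList<string>> CreateNotesAsync(Stream audio)
    {
        // 1. Capture speech in real time and transcribe it (Azure AI Speech).
        string transcript = await TranscribeAsync(audio);

        // 2. Summarise the transcript (Azure AI Language, extractive summarisation).
        ExtractiveSummarizeResult? summary = await SummarizeSpeechTranscript(transcript);

        // 3. Return the concise notes.
        return summary?.Sentences.Select(s => s.Text).ToList() ?? new List<string>();
    }
}
```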


Stay tuned.
