Azure Functions

Introduction to Serverless

You start getting billed for cloud services as soon as you spin them up, even if you never use them. You also need to plan and configure a scaling strategy for these services: some let you enable autoscaling, while for others you must set the scaling configuration manually, but in either case you end up providing the settings that allow the services to scale. With serverless cloud services, you are billed only while the service is running and executing your hosted code; you are not billed while the service is idle. You pay the cloud vendor on an actual consumption basis, which saves you money. The underlying platform manages all the scaling aspects of the application running inside the serverless service, so you do not need to configure any scaling settings. Serverless services are intelligent enough to add new instances to handle incoming traffic and to remove those additional instances when the traffic decreases.

Serverless does not mean that the cloud services are not hosted on any server. You cannot run any code without a server. In the case of serverless services, you do not have control over the server hosting your code. You need to bring your code and host it on the serverless services without worrying about the underlying infrastructure. The cloud vendor manages the underlying infrastructure. The following are a few of the popular serverless offerings provided by Microsoft Azure:

  • Azure Functions
  • Azure Logic Apps
  • Azure Event Grid
  • Serverless Azure Kubernetes Service
  • Serverless SQL Database

Note: In the case of serverless services and platform as a service (PaaS), you can get your code and host it on the service without managing the underlying infrastructure. The cloud vendor manages the infrastructure. However, you need to manage the scaling aspects in the case of PaaS. The cloud vendor manages the scaling for the serverless service. In the case of PaaS, you get billed as soon as you spin up the service. However, in a serverless service, you get billed when the service is active and executes your code.

Azure WebJobs vs. Azure Functions

You create a web job in an App Service Plan. A web job works as a background worker for your applications hosted on Azure App Service. For example, you can host an application that lets users upload files to Azure Blob Storage. Usually, these files will be in a user-specific format. Before the application processes the files, they must be transformed into a standard format that the application can understand. In such scenarios, you can create a web job in the same App Service Plan. This web job runs as a background worker, picks up the user-uploaded file, and transforms it into a format the application can understand. Web jobs can be triggered by a wide variety of sources such as Azure Queue Storage, Cosmos DB, Azure Blob Storage, Azure Service Bus, Azure Event Hubs, and many more. Azure WebJobs meets all the essential developer needs for background processing. However, it shares the same App Service Plan as Azure App Service, which means sharing the same underlying computing infrastructure, and this sharing can lead to performance bottlenecks at times.

Functions are not just meant to process background tasks. They can host business logic for applications as well. However, they are well suited to host code that runs for a short time interval. The functions are serverless offerings and scale independently. The underlying infrastructure manages all the scaling aspects for the function. Web jobs are tied to the Azure App Service instances and scale as and when the Azure App Service instance scales. You need to set scaling configurations explicitly for each web job. Functions can run as and when triggered using consumption-based plans, or they can run continuously using a Dedicated Plan. Web jobs are always tied to the App Service Plan that is a dedicated hosting plan. However, you are not charged separately for web jobs. They come with the App Service Plan. The Azure portal provides a browser-based editor that you can use to build, test, and deploy functions inside the Azure portal. This feature enhances the productivity of the developer. You can integrate Azure Functions with Azure Logic Apps with ease and build enterprise-grade solutions on Azure. Azure Functions supports various triggers such as HTTP WebHooks (GitHub/Slack) and Azure Event Grid that Azure WebJobs does not support.



Azure Functions


Function as a Service (FaaS) is getting more popular every day on all the major cloud platforms. With FaaS, you can build small chunks of code that run for a short time and host them on the FaaS cloud offering. You get billed for the time your function runs, and you do not need to bother about the hosting infrastructure and the scaling aspects.

To develop Azure functions, all we must do is write code that responds to the event we’re interested in, whether that’s an HTTP request or a queue message. All the code that connects those events to the code that handles them is abstracted away from us and handled by the framework.


As we’ll see shortly, this provides a very lightweight development platform that is particularly suited for a fast style of development where you just focus on the code that meets your business requirements, eliminating a lot of overhead.

Another great benefit of Azure Functions compared to Virtual Machines or Web Applications is that there is no need to have at least one dedicated server running constantly. Instead, you only pay when your function runs. If your function is listening on a queue and no messages ever arrive, you won’t have to pay anything.

Azure Functions will automatically scale the number of servers running your functions to meet demand. If there’s no demand, then there might be no servers running your code. However, the framework can spin one up very quickly when required.

How Azure Functions can help us

Let’s see how a serverless architecture and Azure Functions could help us. What we can do is break up some or most of the parts of our monolithic architecture into smaller bits.

Let’s say the webhook for new purchases is going to be handled by an Azure function that posts a message into a queue, and then that message is handled by another function that generates the license file and puts it into a blob.

That blob then triggers another Azure function that emails the license to the customer. As you might have guessed, the reporting webhook is another Azure function that writes a row into Azure Table storage.

The license validation API is another Azure function that performs a database lookup, and the nightly process that produces a report is yet another function.

We still have our web server, which could even be replaced with some static content hosted in blob storage. However, what we’ve managed to do is decompose our monolithic architecture into a set of loosely coupled functions, which you could even think of as nanoservices.

These functions could be wrapped up nicely into a single Azure Functions app, given that they belong together and share configuration settings and local resources.


Typical use cases

Now that we’ve seen how using Azure Functions can simplify an app’s architecture, what sort of applications are Azure Functions and serverless good for?

Azure Functions is not necessarily right for every scenario, but it makes a lot of sense and adds value in specific cases, such as:

Experimentation and rapid prototyping: Since it only takes a few minutes to get up and running using Azure Functions, it is possible to have a back end for a fully functioning prototype in no time.

Automating development processes: Using Azure Functions is a very handy way of automating some of your internal development processes. For example, many software teams use Slack for communication, and it might be convenient to integrate it with your build server so that notifications can be sent to team members when a build fails or produces errors. Given that Slack has a rich ecosystem and extensibility model, all you need to integrate it with Azure Functions is a webhook that Slack can call.

Decomposing and extending applications: Just like we saw previously, Azure Functions is ideal for simplifying monolithic systems.

Independent scaling: Azure Functions is great if you have a number of queue handlers that all have different scaling requirements. If you are managing all the servers yourself, this can quickly turn into a headache. However, if you’re using Azure Functions, you simply hand that problem over to the framework and let it solve the problem for you.

Integrating systems: One of the most exciting and useful features of Azure Functions is the ability to integrate various systems. Sometimes you might need to create an intermediate adapter that connects two systems together—for instance, Twitter and Dropbox. Using Azure Functions is a great way to do that.

Going serverless: Finally, it’s all about not having to worry about managing servers and infrastructure, so using a series of loosely coupled Azure Functions that communicate together is a great way to simplify a back end.


When should we avoid Azure Functions?

Azure Functions are a powerful tool for building serverless applications, but there are certain scenarios where they might not be the best choice:

  1. Long-Running Functions: Azure functions are designed to be short-lived. Long-running functions can cause unexpected time-out issues.
  2. Cold Start: In the consumption plan, functions may have a cold start if they haven’t been used for a while, which can lead to latency.
  3. Stateless Design: Functions are stateless, so if your application requires maintaining state, you might need to consider other options.
  4. Cross-Function Communication: If your application requires complex communication between functions, you might need to use Durable Functions or Logic Apps.
  5. Testing and Debugging: Testing and debugging Azure Functions can be challenging due to their event-driven nature.
  6. Security: Functions should have only the essential privileges required for their task to avoid potential security risks.

 

Azure Storage Explorer

One very useful tool when working with Azure Functions is the Microsoft Azure Storage Explorer desktop application, which is available for free.


When we created our Azure Functions application, a new storage account was automatically deployed as well.



Here are some key features and roles of Azure Storage Explorer in Azure Functions development:

Manage Storage Resources: You can upload, download, and manage Azure Storage blobs, files, queues, and tables, as well as Azure Data Lake Storage entities and Azure managed disks.

Versatility: It allows you to manage your cloud storage accounts in multiple subscriptions across all Azure regions, Azure Stack, and Azure Government.

Extensibility: You can add new features and capabilities with extensions to meet even more of your cloud storage management needs.

Security: You can securely access your data using Microsoft Entra ID (formerly Azure Active Directory) and fine-tuned access control list (ACL) permissions. 

Function App Development: Azure Functions requires an Azure Storage account when you create a function app instance. Azure Storage Explorer lets you work with these storage accounts, which are used to maintain binding state and function keys and to store the function app code.

Local Emulation: Azure Storage Explorer lets you work disconnected from the cloud or offline with local emulators like Azurite. This flexibility helps boost your productivity and efficiency while reducing costs.

Testing and Debugging: Azure Storage Explorer can be used to preview data directly, saving time and simplifying your workflow.


What Are Triggers and Bindings?

Azure functions are serverless components. They remain in an idle state whenever they are not doing any work. You need to invoke Azure functions so that they can wake up and execute the hosted code. Triggers define how the functions run. You can invoke Azure functions using triggers, and triggers provide all the necessary input data, or input payload, for the function. Your Azure functions also need to send data to, or receive data from, other resources such as Queue Storage, Blob Storage, RabbitMQ, and many more. Bindings enable functions to interact with these other services declaratively, without needing to write any plumbing code.

Triggers define how the functions execute. They wake up functions from their idle state and make them execute. Functions can be invoked from a wide range of services. These services invoke functions using triggers and pass on the input data as a payload to the functions. You can configure only a single trigger for an Azure function.

Azure functions need to interact with other services such as Blob Storage, Cosmos DB, Kafka, and more to achieve business functionality. You can use bindings to facilitate data exchange between these services and Azure Functions. Functions can send data to these services or get data from these services as needed.

You do not need to write any code to implement triggers and bindings. You build declarative configurations to enable triggers and bindings and to facilitate interaction between Azure Functions and other services. This saves you a lot of programming effort; otherwise, you would have to write a great deal of code and handle the complexity of these interactions yourself. If you are creating a C# class library for an Azure function using the Visual Studio IDE or Visual Studio Code, you can decorate your function method with attributes to enable triggers and bindings. If you are using the Azure portal to create functions, you can modify the function.json file and add all the necessary configurations to enable triggers and bindings.

The following is an example of function.json that adds a Blob trigger to the Azure function created using the Azure portal. This configuration enables a Blob trigger for the function. The Azure function can accept binary data as input from the Azure Blob.
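A minimal sketch of such a function.json follows; the container name (samples-workitems), binding name (myBlob), and connection setting name are placeholders you would replace with your own values:

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "samples-workitems/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}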


Triggers are unidirectional. Azure functions can receive data from triggers but cannot send back any data to the triggering service. Bindings are bidirectional. Functions can send data to a configured service or receive data from a configured service. The following are the available directions that you define for the bindings:

  1. in
  2. out
  3. inout

The function gets triggered whenever a message gets added to the Azure Service Bus Queue. Alternatively, you can use a Storage Account Queue instead of the Service Bus Queue. The message in the Service Bus Queue is passed to the Azure function as the trigger payload. Azure Queue Storage and Azure Cosmos DB are configured as bindings. Azure Cosmos DB supports bindings in both directions, so the function can send data to and receive data from Azure Cosmos DB. Azure Queue Storage is configured as an output binding, so the function can only send data to it. The function processes the payload message and passes the processed output to Azure Queue Storage and Azure Cosmos DB. It can also get data from Azure Cosmos DB if needed.
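A minimal C# sketch of this scenario might look like the following. It assumes the in-process programming model; the queue names, database and container names, and connection setting names are illustrative placeholders:

[FunctionName("ProcessOrder")]
public static void Run(
    [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
    [Queue("processed-orders", Connection = "AzureWebJobsStorage")] out string queueOutput,
    [CosmosDB("OrdersDb", "Orders", ConnectionStringSetting = "CosmosDbConnection")] out dynamic document,
    ILogger log)
{
    log.LogInformation($"Received message: {message}");

    // Output binding: send the processed message to Azure Queue Storage.
    queueOutput = message;

    // Output binding: persist the processed message in Azure Cosmos DB.
    document = new { id = Guid.NewGuid().ToString(), payload = message };
}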




Supported Triggers and Bindings

Triggers and bindings are crucial for Azure Functions. Actual business scenarios will need an Azure function to exchange data with other services. Azure Functions supports a wide range of triggers and bindings. The supported triggers and bindings depend on the runtime version of Azure Functions. If none of the supported bindings matches your requirements, you can create your custom binding using .NET and use it anywhere per your needs.

Here’s a list of some of the most commonly used ones, followed by a brief sketch of two of them:

  1. HTTP Trigger: This trigger gets fired when an HTTP request is received.
  2. Timer Trigger: This trigger is called on a predefined schedule.
  3. Blob Trigger: This trigger will get fired when a new or updated blob is detected.
  4. Queue Trigger: This trigger gets fired when a new message arrives in an Azure Storage Queue.
  5. Event Hub Trigger: This trigger will get fired when any events are delivered to an Azure Event Hub.
  6. Service Bus Trigger: This trigger is fired when a new message arrives from a service bus queue or topic.
  7. GitHub Webhook: This trigger is fired when an event occurs in your GitHub repositories.
  8. Generic Webhook: This trigger gets fired when the Webhook HTTP requests come from any service that supports Webhooks.

  

host.json

The host.json file in Azure Functions is a metadata file that contains global configuration options affecting all functions in a function app instance. Here are some key points about host.json:

  1. Global Configuration: The settings in host.json apply to all functions within the function app.
  2. Versioning: The structure and properties of host.json can vary depending on the version of the Azure Functions runtime.
  3. Bindings Configuration: Configurations related to bindings are applied equally to each function in the function app.
  4. Environment Settings: You can override or apply settings per environment using application settings.
  5. Logging: The host.json file configuration determines how much logging a functions app sends to Application Insights.

Here’s a sample host.json file for version 2.x+ with all possible options specified:

{
  "version": "2.0",
  "aggregator": {
    "batchSize": 1000,
    "flushTimeout": "00:00:30"
  },
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  },
  "extensions": {
    "blobs": {},
    "cosmosDb": {},
    "durableTask": {},
    "eventHubs": {},
    "http": {},
    "queues": {},
    "sendGrid": {},
    "serviceBus": {}
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  },
  "functions": ["QueueProcessor", "GitHubWebHook"],
  "functionTimeout": "00:05:00",
  "healthMonitor": {
    "enabled": true,
    "healthCheckInterval": "00:00:10",
    "healthCheckWindow": "00:02:00",
    "healthCheckThreshold": 6,
    "counterThreshold": 0.80
  },
  "logging": {
    "fileLoggingMode": "debugOnly",
    "logLevel": {
      "Function.MyFunction": "Information",
      "default": "None"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond" : 20,
        "evaluationInterval": "01:00:00",
        "initialSamplingPercentage": 100.0,
        "samplingPercentageIncreaseTimeout" : "00:00:01",
        "samplingPercentageDecreaseTimeout" : "00:00:01",
        "minSamplingPercentage": 0.1,
        "maxSamplingPercentage": 100.0,
        "movingAverageRatio": 1.0,
        "excludedTypes" : "Dependency;Event",
        "includedTypes" : "PageView;Trace"
      },
      "dependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      },
      "enableLiveMetrics": true,
      "enableDependencyTracking": true,
      "enablePerformanceCountersCollection": true,
      "httpAutoCollectionOptions": {
        "enableHttpTriggerExtendedInfoCollection": true,
        "enableW3CDistributedTracing": true,
        "enableResponseHeaderInjection": true
      },
      "snapshotConfiguration": {
        "agentEndpoint": null,
        "captureSnapshotMemoryWeight": 0.5,
        "failedRequestLimit": 3,
        "handleUntrackedExceptions": true,
        "isEnabled": true,
        "isEnabledInDeveloperMode": false,
        "isEnabledWhenProfiling": true,
        "isExceptionSnappointsEnabled": false,
        "isLowPrioritySnapshotUploader": true,
        "maximumCollectionPlanSize": 50,
        "maximumSnapshotsRequired": 3,
        "problemCounterResetInterval": "24:00:00",
        "provideAnonymousTelemetry": true,
        "reconnectInterval": "00:15:00",
        "shadowCopyFolder": null,
        "shareUploaderProcess": true,
        "snapshotInLowPriorityThread": true,
        "snapshotsPerDayLimit": 30,
        "snapshotsPerTenMinutesLimit": 1,
        "tempFolder": null,
        "thresholdForSnapshotting": 1,
        "uploaderProxy": null
      }
    }
  },
  "managedDependency": {
    "enabled": true
  },
  "singleton": {
    "lockPeriod": "00:00:15",
    "listenerLockPeriod": "00:01:00",
    "listenerLockRecoveryPollingInterval": "00:01:00",
    "lockAcquisitionTimeout": "00:01:00",
    "lockAcquisitionPollingInterval": "00:00:03"
  },
  "watchDirectories": ["Shared", "Test"],
  "watchFiles": ["myFile.txt"]
}

 

local.settings.json

The local.settings.json file in Azure Functions is used for local development and testing. It stores app settings and settings used by local development tools. Here are some key points about local.settings.json:

  1. Local Development: The settings in local.settings.json are used only when you’re running your project locally.
  2. App Settings: When you publish your project to Azure, you should also add any required settings to the app settings for the function app.
  3. Fetch Settings: To fetch settings from Azure, you can use the func azure functionapp fetch-app-settings command. This command downloads the settings from your Azure Function App and stores them in local.settings.json.
  4. AzureWebJobsStorage: The following setting in the Values collection of the local.settings.json file tells the local Functions host to use Azurite for the default AzureWebJobsStorage connection: "AzureWebJobsStorage": "UseDevelopmentStorage=true".

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "MyCustomSetting": "TestValue"
  }
}


 Azure Functions extensibility

The Azure Functions SDK is built on top of the Azure WebJobs SDK and inherits its extensibility framework.

Even though Microsoft already provides around 20 triggers and bindings, you can develop your own triggers or bindings by leveraging the power of this extensibility model.

Understanding how the runtime and its binding process work is essential for using the extensibility model of the Azure functions. The binding process is the way the Azure Functions runtime creates and manages all the objects related to triggers and bindings.

In particular, the Azure Functions runtime has two different phases for the binding process:

Startup binding: This phase occurs when the host of the functions starts. During this phase, the runtime registers all the extensions you add using packages and allows you to register your extensions. An extension is a particular class that contains your triggers and bindings definitions.

Runtime binding: This phase occurs every time a trigger starts a function. During this phase, the runtime, using the binding classes of each trigger and binding involved in the process, creates the payloads and runs the function code.

An extension is a class that implements the IExtensionConfigProvider interface of the WebJob SDK. It is also called the binding provider, and it is registered in the runtime during the startup phase. It is responsible for creating all the infrastructural classes used by the runtime to manage the binding class for triggers and bindings.
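To make this concrete, here is a heavily simplified sketch of a hypothetical extension. The attribute name and the value it resolves are invented for illustration; a real binding provider usually does considerably more work:

[AttributeUsage(AttributeTargets.Parameter)]
[Binding]
public sealed class MyCustomBindingAttribute : Attribute
{
    public string Name { get; set; }
}

public class MyCustomExtensionConfigProvider : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        // Register a binding rule for the hypothetical [MyCustomBinding] attribute.
        // At runtime, the rule converts the attribute into the value injected
        // into the decorated function parameter.
        var rule = context.AddBindingRule<MyCustomBindingAttribute>();
        rule.BindToInput<string>(attr => $"Value resolved for '{attr.Name}'");
    }
}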

The startup binding phase is composed of several steps executed by the runtime each time it starts to host your functions:

 
  1. The runtime analyzes your code, finding all the methods you decorated with the FunctionName attribute.
  2. For each method found in step 1 (potentially an Azure function), the runtime tries to resolve the dependency for each argument of the method using the binding classes defined in every extension registered at startup.
  3. When all the Azure functions found in steps 1 and 2 are processed, the runtime creates an internal representation for each function. The runtime will use those representations during the runtime binding phase to execute the functions quickly. In this step, the runtime excludes every function whose arguments could not be resolved in step 2.
  4. For each trigger used in each function, the runtime creates and executes the corresponding listener.

The runtime binding phase is composed of two steps executed every time an Azure function is triggered:

  1. The runtime retrieves the internal representation of the function (created in step 3 of the startup phase) and, for each binding in the method signature, creates the binding objects the runtime uses to generate the actual objects injected into the execution.
  2. If the runtime can generate all the binding objects correctly, it executes the function, passing them in.

Azure Durable Functions

You may have a scenario where the application logic is broken into smaller chunks, and each chunk of code is hosted in an Azure function. The application consists of several Azure functions that interact with each other and exchange data and state for business processing. You may have to execute the functions in a specific order, like a workflow. You need to orchestrate these Azure functions and make sure that the functions maintain their data and state. Azure functions are stateless by default and cannot handle such scenarios on their own. For this, you need Azure Durable Functions, which helps you make these functions stateful and build workflows.


Plain Azure functions have two important limitations here. First, you cannot call a function directly from another function. Of course, you can make an HTTP call or use a queue, but you cannot call the other function directly, and this limitation makes your solution more complex.

The second (and more critical) limitation is that you cannot create a stateful function, which means you cannot store data in the function. Of course, you can use external services like storage or CosmosDB to store your data, but you have to implement the persistence layer in your function, and again, this situation makes your solution more complex.

For example, if you want to create a workflow using Azure Functions, you have to store the workflow status and interact with other functions.

A workflow is a set of stages, and each of them is typically a function call. You have to store the stage you reach and call other functions directly. You can implement a workflow using Azure Functions, but you need to find a solution to store (using an external service and some bindings) the workflow status and call the functions that compose your workflow.

Durable Functions is an Azure Functions extension that implements triggers and bindings that abstract and manage state persistence. Using Durable Functions, you can easily create stateful objects entirely managed by the extension.

All the components in the Durable Functions world are Azure functions, but they use specific triggers and bindings provided by the extension. Every component has its responsibility:

Client: The client starts or sends commands to an orchestrator. It is an Azure function triggered by a built-in (or custom) trigger as a standard Azure function. It uses a custom binding (DurableClient, provided by the Durable Functions extension) to interact with the orchestrator.

Orchestrator: The orchestrator implements your workflow. Using code, it describes what actions you want to execute and the order of them. An orchestrator is triggered by a particular trigger provided by the extension (OrchestrationTrigger). The OrchestrationTrigger gives you a context of execution you must use to call activities or other orchestrators. The purpose of the orchestrator is only to orchestrate activities or other orchestrators (sub-orchestration). It doesn’t have to perform any calculation or I/O operations, but only governs activities’ flow.

Activity: The basic unit of work of your workflow, the activity is an Azure function triggered by an orchestrator using the ActivityTrigger provided by the extension. The activity trigger gives you the capability to receive data from the orchestrator. Inside the activity, you can use all the bindings you need to interact with external systems.



How Durable functions manage state: function chaining

The Durable Functions extension is based on the Durable Task Framework. The Durable Task Framework is an open-source library provided by Microsoft that helps you implement long-running orchestration tasks. It abstracts the workflow state persistence and manages the orchestration restart and replay.

The Durable Task Framework uses Azure Storage as the default persistence store for the workflow state, but you can use different stores using extended modules. At the moment, you can use Service Bus, SQL Server, or Netherite. To configure a different persistence store, you need to change the host.json file.

We will refer to the default persistence layer (Azure Storage) in the rest of this section, but what follows applies almost unchanged to any other persistence layer. Durable functions use queues, tables, and blobs inside the storage account you configure in your function app settings for the following purposes:

Queues: The persistence layer uses the queue to schedule orchestrators and activities’ executions. For example, every time the platform needs to invoke an activity from an orchestrator, it creates a specific message in one of the queues. The ActivityTrigger reacts to that message and starts the right activity.

Tables: Durable functions use storage tables to store the status of each orchestrator (instances table) and all their execution events (history table). Every time an orchestrator does something (such as call an activity or receive a result from an activity), the platform writes one or more events in the history table. This event sourcing approach allows the platform to reconstruct the actual status of each orchestrator during every restart.

Blobs: Durable functions use the blobs when the platform needs to save data with a size greater than the limitations of queues and tables (regarding the maximum size of a single entity or a single message).


The Durable Functions platform uses JSON as standard, but you can customize the serialization by implementing your custom serialization component.

Queues, tables, and blobs are grouped in a logical container called the task hub. You can configure the name of the task hub for each function app by modifying the host.json file. For example:

{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "ADFSHub"
    }
  }
}






As you can see, the name you configure in the hubName property of the host.json file becomes the prefix of all the queues, tables, and blobs used by the platform.


Here's an example of a client function, an orchestrator function, and an activity function in Azure Durable Functions:

Client Function (StartOrchestrator):

[FunctionName("StartOrchestrator")]
public static async Task<HttpResponseMessage> StartOrchestrator(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    // Function input comes from the request content.
    string instanceId = await starter.StartNewAsync("E1_HelloSequence", null);

    log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

    return starter.CreateCheckStatusResponse(req, instanceId);
}

Orchestrator Function (E1_HelloSequence):

[FunctionName("E1_HelloSequence")]
public static async Task<List<string>> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var outputs = new List<string>();

    // Replace "E1_SayHello" with the name of your activity function
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Tokyo"));
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello", "Seattle"));
    outputs.Add(await context.CallActivityAsync<string>("E1_SayHello_DirectInput", "London"));

    // returns ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
    return outputs;
}

Activity Function (E1_SayHello):

[FunctionName("E1_SayHello")]
public static string SayHello([ActivityTrigger] string name, ILogger log)
{
    log.LogInformation($"Saying hello to {name}.");
    return $"Hello {name}!";
}

In this example, the StartOrchestrator function is an HTTP-triggered function that starts the E1_HelloSequence orchestrator function. The DurableClient attribute is used to get a reference to the IDurableOrchestrationClient interface, which is used to start the orchestrator.

The orchestrator function E1_HelloSequence calls the activity function E1_SayHello three times with different inputs. The results are added to the outputs list and returned.

The E1_SayHello activity function takes a string input and returns a greeting message.


The client is an Azure function with an HTTP trigger and uses the DurableClient attribute to declare the binding that allows you to manage the orchestrator.

The IDurableOrchestrationClient instance provided by the Durable Functions platform is the payload class for the new binding. StartNewAsync allows you to start a new orchestration, giving the name of the orchestrator function and an optional input object (null in our sample), and it returns the instance ID of the orchestrator.


The OrchestrationTrigger trigger starts the function and manages the restart every time an activity, called by the orchestrator, finishes. You can also use the trigger to retrieve the object passed by the client (using the GetInput method).

The orchestrator must implement only the workflow flow, and must not perform calculations, I/O operations, or similar work. The code you write in an orchestrator must be deterministic. This means you cannot use values generated inside the orchestrator (like a random value) or values that can change every time the orchestrator runs (for example, a value related to local time). Remember that the platform restarts the orchestrator every time an activity finishes and rebuilds its state using the events stored in the history table mentioned before. The state-building phase must be deterministic; otherwise, you receive an error.
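For example, a hedged sketch of how an orchestrator can stay deterministic, using the replay-safe APIs exposed by the context instead of DateTime.UtcNow or Guid.NewGuid():

[FunctionName("DeterministicOrchestrator")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // DON'T use DateTime.UtcNow or Guid.NewGuid() here: they change on every replay.
    // DO use the context APIs: they return the same values on every replay.
    DateTime startedAt = context.CurrentUtcDateTime;
    Guid operationId = context.NewGuid();

    await context.CallActivityAsync("E1_SayHello", $"operation {operationId} started at {startedAt:O}");
}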


Orchestrator event-sourcing pattern

We previously discussed function chaining, one of the simplest workflow patterns we can implement using Durable Functions. The question is: how does the platform execute every single instance of the orchestrator, reconstructing precisely the state it had reached?

Durable functions are Azure functions, and therefore, cannot have a long execution time. The Durable Functions platform runs the orchestrator for the time necessary to perform the steps required to invoke one of the activities. It stops (literally) and restarts when the activity completes its job. At that point, the platform must reconstruct the state the orchestrator had reached to resume the execution with the remaining activities.

The platform manages the state of every single orchestrator using the event-sourcing pattern. Every time the orchestrator needs to start an activity, the platform stops its execution and writes a set of rows in a storage table. Each row describes what the orchestrator did before starting the activity (event sourcing).


Stateful Patterns with Durable Functions

The Durable Functions platform uses the underlying storage to manage each orchestrator's current state and drive the orchestration between the activities.

Fan-out/fan-in

In the fan-out/fan-in pattern, you must execute several activity functions in parallel and wait for all functions to finish (because, for example, you have to aggregate the results received from them in a single value).
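A hedged sketch of this pattern follows; the activity names GetWorkItems and ProcessItem are hypothetical:

[FunctionName("FanOutFanIn")]
public static async Task<int> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string[] workItems = await context.CallActivityAsync<string[]>("GetWorkItems", null);

    // Fan-out: start every activity without awaiting it individually.
    var tasks = workItems
        .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
        .ToList();

    // Fan-in: wait for all parallel activities, then aggregate the results.
    int[] results = await Task.WhenAll(tasks);
    return results.Sum();
}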





Function Chaining

In this pattern, several functions execute one after the other. The first function processes the data and sends the data for further processing to the second function. The second function processes the data further and sends the data to the third function and so on. In this pattern, we chain a set of functions, with each function in the chain performing business logic for the scenario and passing on the data and state to the next function.






Async HTTP APIs

In some scenarios, you may have a long-running activity processing the business functionality. You need to keep track of the execution status of the long-running activity and get the processed results once the activity completes. You can build such scenarios using async HTTP APIs. A client application triggers an HTTP-triggered orchestrator client, which invokes the Orchestrator function to orchestrate the Activity functions that execute the long-running tasks. The Azure Durable Functions workflow exposes a set of REST APIs that report the processing status and results of the workflows. The client application can invoke these REST APIs to monitor the completion status of the long-running tasks and get the processed results.




Monitoring

You may have scenarios where you need to monitor events or the execution status of an external process or another function. You can use long-running durable functions to continuously check the events or execution status of the external process and perform an activity when a specified condition is met. For example, you may have an Azure function that gets triggered whenever an item gets inserted in the Queue Storage. You need to generate a notification whenever the Azure function goes down or generates an exception. You can have a long-running durable function executing continuously and monitoring the exceptions generated by the Azure function or monitoring the health of the Azure function. Whenever the Azure function generates an exception or goes down, the durable function will send a notification.
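A hedged sketch of the monitor pattern follows; the activity names CheckFunctionHealth and SendNotification, the polling interval, and the overall monitoring window are illustrative:

[FunctionName("MonitorFunctionHealth")]
public static async Task Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    DateTime monitorUntil = context.CurrentUtcDateTime.AddHours(6);

    while (context.CurrentUtcDateTime < monitorUntil)
    {
        bool healthy = await context.CallActivityAsync<bool>("CheckFunctionHealth", null);
        if (!healthy)
        {
            await context.CallActivityAsync("SendNotification", "The monitored function is down.");
            break;
        }

        // Wait in a replay-safe way before the next check.
        DateTime nextCheck = context.CurrentUtcDateTime.AddMinutes(5);
        await context.CreateTimer(nextCheck, CancellationToken.None);
    }
}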


Human Interaction

You may have a maker-checker scenario where a maker creates a request, and the request gets forwarded to the checker for verification and approval. For example, a loan approval system can be developed using the Azure Durable Functions workflow. A customer invokes the Orchestrator client function, which invokes the Orchestrator function and starts the loan approval process. The Orchestrator function calls an Activity function that sends an email to the approver, and then waits for an external event together with a durable timer. When the approver approves or rejects the loan using a user interface application, the application raises the external event with the approval status, and the Orchestrator function resumes and continues processing. If the approver does not take any action within the specified time interval, the durable timer fires instead and returns control to the Orchestrator function, which can then escalate the request.
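A hedged sketch of this human-interaction pattern follows; the activity name, the event name ApprovalEvent, and the 72-hour deadline are illustrative:

[FunctionName("LoanApprovalOrchestrator")]
public static async Task<string> Run(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    string loanRequest = context.GetInput<string>();
    await context.CallActivityAsync("SendApprovalEmail", loanRequest);

    using (var cts = new CancellationTokenSource())
    {
        DateTime deadline = context.CurrentUtcDateTime.AddHours(72);
        Task timeoutTask = context.CreateTimer(deadline, cts.Token);
        Task<bool> approvalTask = context.WaitForExternalEvent<bool>("ApprovalEvent");

        Task winner = await Task.WhenAny(approvalTask, timeoutTask);
        if (winner == approvalTask)
        {
            cts.Cancel();   // cancel the pending timer so the orchestration can complete
            return approvalTask.Result ? "Approved" : "Rejected";
        }

        return "Escalated: the approver did not respond before the deadline";
    }
}

The user interface application would raise the ApprovalEvent through the Durable Functions client API (for example, RaiseEventAsync on IDurableOrchestrationClient), passing the approval status as the event payload.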


Aggregator

In this pattern, the durable function aggregates event data from multiple sources, processes the aggregated data, and makes it available for client applications to query and use the data. You need to use durable entities to address such scenarios.


Durable Entities

One of the most important stateful patterns in computer science is the aggregator pattern. In the aggregator pattern, you want to implement something that receives multiple calls from the outside, aggregates all the data it receives in an internal status, and manages the status persistence.

If you want to implement this pattern using Azure Functions, you will run into some issues:

• Every function execution is different from the others. Every function execution is a different instance, potentially running in a different server. For this reason, you need to store its status outside your function, and you need to take care of the persistence layer.

• In this kind of scenario, the order of the data you receive is important, and you may receive a lot of data in a short amount of time, so you need to guarantee the order of what you receive. Because you cannot avoid multiple instances of the same function, related to the same aggregator, running simultaneously, you need to manage the concurrency of the state-saving operation at the persistence layer.

Imagine you want to implement an Azure function that stores the telemetry data coming from an IoT field device.



We suppose that every device must have these features:

  • It receives several telemetries at regular intervals.
  • It must store and manage the telemetries in chronological order.
  • It can have an internal logic to analyze the telemetry in search of anomalies and, if necessary, send notification of the situation to external services.
  • It must expose APIs for retrieving telemetry and for setting configuration.
  • We can have devices with different behavior with the least possible impact on the code.

The Durable Functions platform provides us with special durable functions called durable entities, or entity functions. In the following paragraphs, we will see in detail the characteristics of these particular functions and how they can help us.

A durable entity is similar to an orchestrator. It is an Azure function with a special trigger (the EntityTrigger). Like an orchestrator, a durable entity manages its state abstracting the persistence layer.

You can consider a durable entity like a little service that communicates using messages. It has a unique identity and an internal state, and it exposes operations. It can interact with other entities, orchestrators, or external services.
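A hedged, class-based sketch of a durable entity modeling the IoT device described above follows; the entity name and its operations are illustrative:

[JsonObject(MemberSerialization.OptIn)]
public class DeviceEntity
{
    [JsonProperty("readings")]
    public List<double> Readings { get; set; } = new List<double>();

    // Operation: store a new telemetry reading (calls are processed one at a time, in order).
    public void AddTelemetry(double value) => Readings.Add(value);

    // Operation: expose an aggregate that callers can query.
    public double GetAverage() => Readings.Count == 0 ? 0 : Readings.Average();

    [FunctionName(nameof(DeviceEntity))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<DeviceEntity>();
}

A client function could then signal this entity with SignalEntityAsync, using an EntityId built from the entity name and a key such as the device ID.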


Best Practices and Pitfalls to Avoid

You must follow design guidelines and implement best practices while building solutions with Azure Functions. Following best practices ensures that you build efficient, robust, fault-tolerant, highly available, and highly scalable solutions. Azure Functions is a serverless service, and you do not have any control over the underlying hosting infrastructure or over how each function scales. So, you must design your Azure Functions solutions to run on the underlying infrastructure in an optimized manner and to scale per the business requirements. Though you do not have any control over the underlying infrastructure, you can design your solution to use the infrastructure and the hosting environment optimally. You just have to pick a design that best suits your business requirements. For example, if you need to run a long-running task on Azure Functions, it may not be a good idea to host it on a Consumption Plan. The Consumption Plan can run your task for at most 10 minutes, after which the execution times out; in other words, you cannot run any code in an Azure function for more than 10 minutes on the Consumption Plan. You also need to decide on an efficient way to manage and monitor the functions executing your application code.

Mitigate Delay Startups

Azure functions execute when triggered. Once they complete execution, they go into an idle state and finally into a sleep state. They wake up and start executing only when they are triggered again. However, the functions do not start executing instantaneously when triggered; they take some time to wake up from the sleep state and get warmed up before they can start processing the request. As mentioned earlier, this is referred to as the cold-start phenomenon. If you are building a real-time application, the function needs to be active at all times and must execute as soon as it gets invoked. You must have at least one warmed-up instance that can start execution as soon as the function gets triggered. For such scenarios, you cannot use Azure functions running on a Consumption Plan. You may choose to design an alternative solution using Azure Web Apps or some other service that meets the real-time needs of the application. Alternatively, you can run your Azure functions on a Premium Plan, which ensures that at least one instance is always up and ready to serve an incoming request.

For example, say you are building an Internet of Things (IoT) application that monitors the temperature of a room. If the temperature goes beyond a particular limit, the IoT application needs to invoke an Azure function that can instantly generate a notification or a warning. You cannot run this Azure function on the Consumption Plan as you may encounter a delay in starting up the Azure function when triggered, and this will delay the generation of the notification. If the notification gets delayed, then there can be severe consequences for the apparatus and systems in the room being monitored. To address such scenarios, you should use a Premium Plan or look for an alternative PaaS service to generate notifications for the system on demand.


Get the Correct Bill to Fit Your Budget

Cost planning is an essential aspect while designing a cloud-based solution for your application. In Azure Functions, you do not have control over the scaling aspects of the hosted application. New instances get added on the fly when the incoming load increases, and you get billed for all the instances that get added. However, you get billed for the period when the function executes on that instance. You may have a scenario when your application needs to scale exponentially during peak hours. In such scenarios, new instances get added on the fly to handle the incoming traffic, and your cost spirals exponentially. You may not have factored in the exponential scaling while designing the Azure function. In such scenarios, you must consider all the scaling scenarios and deduce the actual cost incurred for the Azure function. You may also choose options to control the degree of scaling and plan the number of instances the Azure function can scale so that the cost incurred for the Azure function is well within the limit.

Handle Long-Running Code

Azure functions are best suited to host code that executes for a short duration. However, you may have scenarios where you need to run your code for a longer duration. In such a scenario, you should consider breaking the code into smaller chunks and running each chunk in a separate Azure function. Some applications, such as a polling application that executes for a long time interval, can be difficult to split into smaller chunks. To address such scenarios, you can run the Azure function on a Dedicated Plan, which allows code to run for a longer time interval. Alternatively, you can run the code as a WebJob background process or in a Web App.

Identify and Manage the Bottlenecks

Azure functions are serverless components, and the underlying infrastructure takes care of all the scaling needs. The functions can scale to a considerable amount automatically to manage the incoming loads. However, the Azure functions may interface with other PaaS-based or IaaS-based services that can scale within a particular limit. For example, you have an Azure function running on the Consumption Plan. The Azure function inserts data into an Azure SQL Database instance. During peak hours, a large number of concurrent requests hit the Azure function, and the function scales to a large number of instances to handle the incoming traffic. Each of these function instances may hit the Azure SQL Database instance at the same time. Azure SQL Database may not be able to scale to that extent and handle the incoming traffic. This action will result in a performance bottleneck for the Azure SQL Database instance. Even though the Azure function can scale and handle the incoming load, the SQL Database instance cannot scale to that extent. Your solution as a whole incurs performance bottlenecks. You must identify all such scenarios and implement strategies to handle them. You may need to control the degree of concurrency for the Azure functions using queuing mechanisms. You can add the items to be processed in a queue and then send a finite number of items to the Azure function for processing to avoid spinning out a large number of instances while scaling out to manage the incoming load.
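For example, when the function is queue-triggered, you can limit how many messages each instance processes concurrently through host.json. A hedged sketch, with illustrative numbers:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}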


Make Your Solution Fault Tolerant

You must make your solution fault tolerant. If the Azure functions fail to process the request, you must have mechanisms to process the request again. You should have a robust retry mechanism in place. The retry count should be a finite number and easily configurable for the solution. For example, if the function fails to process a request, it should send the failed request to a queue and accumulate all the failed requests there. After a specific time interval, it should pick up the items from the failed queue one by one and process them. You may also have a scenario where the Azure function picks up an item from the queue but cannot process it because a dependent service is unavailable. If this continues, the function keeps picking up items from the queue without processing them, and you end up losing all the items in the queue with none of them processed. In all such scenarios, you must implement a circuit breaker pattern. If the function fails to process an item in the queue, it should not pick up the next item; it should not pick up any item until the dependent service is back up and the items can be processed. It should also add the failed item to another queue so it can be retried later.
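As one option, you can declare a retry policy directly on the function. A hedged sketch, assuming the retry attributes available in recent versions of the Functions runtime; the queue name and retry parameters are illustrative:

[FunctionName("ProcessQueueItem")]
[FixedDelayRetry(5, "00:00:10")]   // retry up to 5 times, waiting 10 seconds between attempts
public static void Run(
    [QueueTrigger("incoming-items", Connection = "AzureWebJobsStorage")] string item,
    ILogger log)
{
    log.LogInformation($"Processing item: {item}");
    // If this function throws, the runtime retries the execution according to the policy above.
}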


Secure the APIs Developed Using Azure Functions

You build APIs using HTTP-triggered Azure functions. These APIs may perform a wide range of actions that can be either simple CRUD operations or complex business logic processing. You must secure these APIs to prevent unauthorized access. The best way to secure them is by integrating the HTTP-triggered Azure functions with the Azure API Management service or an alternate third-party service similar to this. All your requests for these functions will get routed through the API Management service. You can configure and manage the request and response parameters in the header and the body, configure CORS settings, decide on whom to allow and whom not to allow, and do many such activities using the API Management service. You can even integrate the API Management service with Azure Active Directory and perform OAuth-based authentication.

Facilitate Efficient Monitoring and Debug Failures

You must ensure that you integrate Application Insights or any other alternative monitoring and logging third-party solution with Azure Functions. Logs and metrics help you to debug applications. In the case of Azure Functions, you do not have access to the underlying code hosting infrastructure. So, you must log all information and failures to figure out the root cause and use the logs for audit activities. Application Insights provides an efficient mechanism to capture logs and metrics. You get rich visualizations of the metrics and logs that help you analyze your application, hosting infrastructure, and hosting environment with ease.
