Which One to Choose – Azure Durable Functions or Logic Apps?

If you are architecting a solution that requires serverless, stateful workflows on the Azure platform, you have two choices:

  1. Azure Durable Functions
  2. Logic Apps

Azure Durable Functions is a new programming model based on Microsoft's serverless platform, Azure Functions. It allows you to write a workflow as code and have the execution run with the scalability and reliability of serverless, with high throughput.

Azure Logic Apps is a cloud service that helps you automate and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organisations. Logic Apps simplifies how you design and build scalable solutions for app integration, data integration, system integration, enterprise application integration (EAI), and business-to-business (B2B) communication, whether in the cloud, on premises, or both.

In this blog post, I discuss the main differences between these two event-driven Azure services and provide some guidance to help you make the right decision.

Feature Comparison

Let us compare the features supported by these two platforms, such as workflow capabilities, connectors, logging, and exception handling. The comparison below sheds more light on these features:

Connectors or Bindings
  • Durable Functions: The list of supported bindings is here. Some of these bindings support triggering a function, or are inputs or outputs. The list of bindings is growing, especially for the Functions runtime version 2.
  • Logic Apps: Logic Apps provide more than 200 connectors, and the list keeps growing. Among these are protocol connectors, Azure service connectors, Microsoft SaaS connectors, and third-party SaaS connectors.

Actions
  • Durable Functions: Can orchestrate activity functions (with the ActivityTrigger attribute). Those activity functions can, in turn, call other services using any of the supported bindings. Additionally, orchestrations can call sub-orchestrations.
  • Logic Apps: Many different workflow actions can be orchestrated. Logic Apps workflows can call actions of the more than 200 connectors, workflow steps, other Azure Functions, other Logic Apps, etc.

Flow Control
  • Durable Functions: The workflow's flow is controlled using standard code constructs, e.g. conditions, switch statements, loops, and try-catch blocks.
  • Logic Apps: You can control the flow with conditional statements, switch statements, loops, and scopes, and control activity chaining with the runAfter property.

Fan-Out / Fan-In Pattern
  • Durable Functions: Functions can be executed in parallel, and the workflow can continue when all or any of the branches finish.
  • Logic Apps: You can fan out and fan in actions in a workflow simply by implementing parallel branches, or ForEach loops running in parallel.

Async HTTP APIs and Get Status Pattern
  • Durable Functions: Client applications or services can invoke a Durable Functions orchestration asynchronously via HTTP APIs and later get the orchestration status to learn when the operation completes. Additionally, you can set a custom status value that can be queried by external clients.
  • Logic Apps: Client applications or services can call the Logic Apps Management API to get the instance run status. However, either the client has to have access to this API, or you need to implement a wrapper around it. A custom status value is not currently supported out of the box; if required, you would need to persist it in a separate store and expose it via a custom API.

Programmatic Instance Management
  • Durable Functions: Client applications or services can monitor and terminate instances of Durable Functions orchestrations via the API.
  • Logic Apps: Client applications or services can call the Logic Apps Management API to monitor and terminate instances of Logic Apps workflows. However, either the client has to have access to this API, or you need to implement a wrapper.

Concurrency Control
  • Durable Functions: Concurrency throttling is supported.
  • Logic Apps: Concurrency control can be configured at the workflow level or the loop level.

Lifespan
  • Durable Functions: One instance can run without defined time limits.
  • Logic Apps: One instance can run for up to 90 days.

Error Handling
  • Durable Functions: Implemented with the constructs of the language used in the orchestration.
  • Logic Apps: Retry policies and catch strategies can be implemented.

Orchestration Engine
  • Durable Functions: Orchestrator functions and activity functions may run on different VMs; nevertheless, Durable Functions ensures reliable execution of orchestrations. To support this, checkpointing is implemented at each await statement. Additionally, after resuming from an await, the orchestration replays from the start until it reaches the last checkpointed activity, in order to rebuild the in-memory state of the instance. For high-throughput scenarios, you can enable extended sessions.
  • Logic Apps: The runtime engine breaks the workflow definition down into tasks and distributes them among different workers. The engine ensures that each task is executed at least once, and that tasks are not executed until their dependencies have finished with the expected status.
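To make the Durable Functions side more concrete, here is a minimal sketch (assuming the Durable Functions 1.x API; the activity names "GetItems" and "ProcessItem" are hypothetical) of an orchestrator that combines several of the rows above: fan-out/fan-in over parallel activities, checkpointing at each await, and a custom status value that external clients can query:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanInOrchestrator
{
    [FunctionName("FanOutFanIn")]
    public static async Task<int> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Each await is a checkpoint; on replay, completed calls return their saved results.
        var items = await context.CallActivityAsync<List<string>>("GetItems", null);

        // Fan-out: start all activities in parallel.
        var tasks = items
            .Select(item => context.CallActivityAsync<int>("ProcessItem", item))
            .ToList();

        // Fan-in: the orchestration continues when all branches finish.
        int[] results = await Task.WhenAll(tasks);

        // Custom status value, queryable by external clients via the HTTP API.
        context.SetCustomStatus("All items processed");

        return results.Sum();
    }
}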

Pricing

Even though both offer a serverless model where you only pay for what you use, there are some differences to consider, as described below.

Durable Functions
  • In the Consumption plan, you pay per second of resource consumption and per execution. More details are described here.
  • Durable Functions can also be deployed on App Service plans or App Service Environments, where you pay per instance.

Logic Apps
  • For workflows, you pay per action and per trigger (skipped, failed, or succeeded). There is also a marginal cost for storage.
  • If you need B2B integration, XML schemas and maps, or Liquid templates, you also need to pay for an Integration Account.
  • At the moment there is no option to run Logic Apps on your own dedicated instances.

It's also worth mentioning that in most cases the operating costs of Logic Apps tend to be higher than those of Durable Functions, though this depends on the particular case. And for enterprise-grade solutions, you should not pick a platform based on price alone; you have to consider all the requirements and the value provided by the platform.

Management and Monitoring 

It is really important to choose a platform that you can manage and monitor effectively once your solution is deployed to production. Management and monitoring therefore play a key role in the decision, and the points below may help you choose between the two:

Tracing and Logging
  • Durable Functions: The orchestration activity is tracked by default in Application Insights. Furthermore, you can implement your own logging to App Insights.
  • Logic Apps: The run history and trigger history are logged by default. Additionally, you can enable diagnostic logging to send further details to Log Analytics. You can also make use of trackedProperties to enrich your logging.

Monitoring
  • Durable Functions: To monitor workflow instances, you need to use the Application Insights query language to build your own custom queries and dashboards.
  • Logic Apps: The Logic Apps blade and the Log Analytics workspace solution for Logic Apps provide rich, friendly visual tools for monitoring.

Resubmitting
  • Durable Functions: There is no out-of-the-box functionality to resubmit failed instances.
  • Logic Apps: Failed instances can easily be resubmitted from the Logic Apps blade or the Log Analytics workspace.
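As a sketch of the custom-logging row above, this hypothetical activity function (assuming the Functions v2 runtime with Application Insights enabled; the function name "ProcessItem" is illustrative) writes a structured log entry via the injected ILogger, which flows into the same telemetry as the built-in orchestration tracking:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessItemActivity
{
    [FunctionName("ProcessItem")]
    public static int Run([ActivityTrigger] string item, ILogger log)
    {
        // Structured log entry; appears in Application Insights traces.
        log.LogInformation("Processing item {Item}", item);
        return item.Length;
    }
}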

Deployment Process

CI/CD
  • Durable Functions: Builds and deployments can be automated using VSTS build and release pipelines. Additionally, other build and release management tools can be used.
  • Logic Apps: Deployed using ARM templates only.

Versioning
  • Durable Functions: A versioning strategy is very important. If you introduce breaking changes in a new version of your workflow, in-flight instances will break and fail. You can find more information and mitigation strategies here.
  • Logic Apps: Version history is kept for all workflows saved or deployed. Running instances continue running on the version that was active when they started.

Runtime
  • Durable Functions: Azure Functions can not only run on Azure, but can also be deployed on premises, on Azure Stack, and in containers.
  • Logic Apps: Can only run on Azure.

Summary: 

Logic Apps are better suited when:

  • You are building integration solutions and need to leverage the extensive list of connectors, which reduces time-to-market and eases connectivity.
  • You require visual tools to manage and troubleshoot workflows.

Durable Functions are better suited when:

  • The list of available bindings is sufficient to meet the requirements.
  • The logging and troubleshooting capabilities are sufficient, and you can build your own custom monitoring tools.
  • You prefer to have all the power and flexibility of a robust programming language.

 

I hope this will help !!!

Troubleshooting problems in applications running on Azure

Troubleshooting application problems is difficult and takes a lot of time. I would argue it might be what we developers spend most of our time on. When your application is in production, it is even more difficult to find out what went wrong.

Traditionally, you could RDP into the server and open up the app logs, IIS logs or look at the event logs to get a hint of what went wrong. But where can you find that information if you are running your app in the cloud? Things are different in the cloud – there may not be a server to log into.

I recommend reading more about Azure log types and how to activate and use them effectively at https://stackify.com/azure-app-service-log-files/

Log files are useful, and even when you are running in Azure you have plenty of options for getting information out of them. Still, it takes effort: you need to aggregate the files and somehow analyze them, and those are difficult problems that will slow you down when you are hunting a bug in production.

I recommend using tools that visualise the information contained in your Azure logs. You usually do not have to enable the logs specifically for this, as most of these tools capture the information automatically. Tools like Application Insights give you an overview of the health of all your applications, including the information contained in the log files and more. Using tools like these also enables you to be notified of exceptions so that you can go bug-hunting proactively.

Application Insights

Application Insights is an extensible Application Performance Management (APM) service for web developers on multiple platforms. Use it to monitor your live web application. It will automatically detect performance anomalies. It includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. It’s designed to help you continuously improve performance and usability.

Read more…
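As a minimal, hedged sketch of using the SDK directly (the Microsoft.ApplicationInsights NuGet package; the instrumentation key and event name are placeholders):

using System;
using Microsoft.ApplicationInsights;

public static class TelemetryExample
{
    public static void Run(Action doWork)
    {
        var telemetry = new TelemetryClient();
        telemetry.InstrumentationKey = "<your-instrumentation-key>"; // placeholder

        // Custom event, visible in the portal and queryable in Analytics.
        telemetry.TrackEvent("OrderSubmitted");

        try
        {
            doWork();
        }
        catch (Exception ex)
        {
            // Exceptions tracked here show up under Failures in the portal.
            telemetry.TrackException(ex);
            throw;
        }
    }
}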

What can you do with Application Insights?

16 Things Every Developer Needs to Know About Application Insights

Application Insights is Microsoft’s lightweight application performance monitoring service. I have collected a nice list of things that every developer should know, including tips, key features, and limitations.

Read more…

Structured logs with Serilog using Application Insights

Serilog is a library for gathering structured logs from your application. You can log both exceptions and other events that happen somewhere in the code. I really like it because it's simple, I can easily serialize and include objects (e.g. method arguments) in the logs, and I can easily browse and query them.

Read more…
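A minimal sketch of what this can look like, assuming the Serilog and Serilog.Sinks.ApplicationInsights packages (the instrumentation key and logged values are placeholders):

using Serilog;

public static class SerilogExample
{
    public static void Run()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.ApplicationInsightsTraces("<your-instrumentation-key>") // placeholder key
            .CreateLogger();

        var order = new { Id = 42, Total = 99.95m };

        // The @ operator destructures the object into the log event,
        // so its properties can be browsed and queried later.
        Log.Information("Processing {@Order} for {Customer}", order, "jane.doe");

        Log.CloseAndFlush();
    }
}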

I hope this will help !!!

.NET Core 2.1 – Use HttpClientFactory to implement resilient HTTP requests

The original and well-known HttpClient class is easy to use, but in some cases it is not used properly by many developers.

As a first issue, while this class is disposable, using it with the using statement is not the best choice, because even when you dispose the HttpClient object, the underlying socket is not immediately released, which can cause a serious issue known as 'socket exhaustion'.

Therefore, HttpClient is intended to be instantiated once and reused throughout the life of an application. Instantiating an HttpClient for every request will exhaust the number of available sockets under heavy load, resulting in SocketException errors. Possible approaches to solving that problem are based on creating the HttpClient object as a singleton or static instance.

But there's a second issue with HttpClient that appears when you use it as a singleton or static object: in that case, HttpClient doesn't respect DNS changes.

To address those mentioned issues and make the management of HttpClient instances easier, .NET Core 2.1 offers a new HttpClientFactory that can also be used to implement resilient HTTP calls by integrating Polly with it.

Read more…
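As a sketch of the registration in an ASP.NET Core 2.1 app (assuming the Microsoft.Extensions.Http.Polly package; the client name "catalog" and base address are placeholders):

using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient("catalog", client =>
    {
        client.BaseAddress = new Uri("https://example.com/"); // placeholder
    })
    // Retry transient HTTP errors (5xx and 408) up to 3 times with exponential back-off.
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));
}

Consumers then inject IHttpClientFactory and call CreateClient("catalog"); the factory pools and recycles the underlying handlers, which addresses both the socket exhaustion and the DNS staleness issues described above.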

I hope this will help !!!!

 

AWS API Gateway – API Creation with Lambda Proxy Integration and Web Page Redirection (302) using .NET Core 2.0

In this article, we will see how to create and test an API with Lambda proxy integration using the API Gateway console. We will also see how a Lambda backend parses the query string and implements application logic that depends on the incoming query string parameters, and how to create a 302 redirect response from the Lambda function that sends the caller directly to a web page.

Topics covered in this article

  1. How to create a Lambda function for Lambda proxy integration using .NET Core 2.0?
  2. How to create a resource?
  3. How to create an HTTP GET method with Lambda proxy integration?
  4. How to set up a request query string parameter?
  5. How to set up a response redirect (HTTP status 302) with a Location header parameter?

Step-1: Create a Lambda function for Lambda proxy integration using .NET Core 2.0

Create a new AWS Lambda project named "ApiLambdaVerifyEmail" using Visual Studio 2017 (version 15.5.6):

[Screenshot: 01-Create-LambdaFunction-DotNetCore]

Add the following NuGet packages, which provide the API Gateway proxy request/response types and other Lambda features (typically Amazon.Lambda.Core, Amazon.Lambda.APIGatewayEvents, and Amazon.Lambda.Serialization.Json):

[Screenshot: 02-NuGet-Packages-Used]

Update the function handler with the following code:


// Required usings (at the top of the Lambda function class file)
using System;
using System.Collections.Generic;
using System.Net;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest input, ILambdaContext context)
{
    // Read the "pageKey" query string parameter (guarding against a null dictionary)
    string queryString = null;
    input.QueryStringParameters?.TryGetValue("pageKey", out queryString);

    // Set the default URL to use if no match is found
    string redirectUrl = @"https://ramanisandeep.wordpress.com/";

    if (!string.IsNullOrEmpty(queryString))
    {
        Console.WriteLine("pageKey : " + queryString);
    }

    // Use the query string parameter to do some DB operation

    // Based on the database operation, redirect to web page X, Y, or Z
    switch (queryString)
    {
        case "google":
            redirectUrl = @"https://www.google.co.in";
            break;
        case "twitter":
            redirectUrl = @"https://twitter.com";
            break;
        case "sandeep":
            redirectUrl = @"https://ramanisandeep.wordpress.com/";
            break;
    }

    Console.WriteLine("URL : " + redirectUrl);

    // Redirect to the web page using a 302 response code and the Location header
    var response = new APIGatewayProxyResponse
    {
        StatusCode = (int)HttpStatusCode.Redirect,
        Headers = new Dictionary<string, string> { { "Location", redirectUrl } }
    };

    return response;
}

 

What the above code does:

  • Reads the query string parameter pageKey.
  • Based on the query string parameter's value, sets the redirect URL for the response.
  • Creates an APIGatewayProxyResponse object with the page-redirect (302) status code and adds the Location header for the page to redirect to.

Publish the Lambda function by right-clicking the project in Solution Explorer:

[Screenshot: 03-PublishToLambda-Step-1]

Next, the Upload Lambda Function dialog is displayed, where you can select or define your profile, region, and Lambda function name; then press the Next button:

[Screenshot: 04-Upload-Lambda-function]

Next, a dialog is displayed for selecting the permissions of the Lambda function you are uploading; once you have selected a role, press the Upload button:

[Screenshot: 05-Upload-Lambda-function-2]

Note: If you haven't created a role for the Lambda function yet, create one using the IAM console, as per your Lambda function's access requirements. For this article we are not accessing any other AWS service from the Lambda function, so no special role is required.

The next step is to create the API Gateway API with Lambda proxy integration.

Step-2: Create the resource – TestRedirectPage:

See the AWS guide on how to set up a resource.

[Screenshot: 1-New-Resource-Creation]

Step-3: Set up the GET method with Lambda proxy integration:

See the AWS guide on how to set up an HTTP method.

[Screenshot: 2-Setup-Get-Method-For-Lambda-Proxy-Integration]

[Screenshot: 3-Add permission to lambda function-confirmation]

After the GET method configuration is saved, the screen shows the complete request/response configuration:

[Screenshot: 4-Method-Execution]

Step-4: Set up the query string parameter for API Gateway – pageKey:

[Screenshot: 5-Add-QueryString-Method-Request]

Verify that the GET integration request type is LAMBDA_PROXY:

[Screenshot: 6-Get-IntegrationRequesst]

Step-5: Set up the GET method response redirect (HTTP status 302) with the Location header parameter:

When setting up the API Gateway method response, start by declaring the status code in the Method Response section. 200 will already be there, so delete it. Add your 302 here instead, and add the Location response header.

[Screenshot: 7-Get-Method-Response]

After all of the above configuration is done, Method Execution will look like this:

[Screenshot: 8-Method-Execution-AfterConfigurations]

Test the GET method we configured by clicking the Test link.

  • Set the Query Strings textbox to "pageKey=sandeep" and press the Test button.

[Screenshot: 9-Get-Method-Test]

Deploy the API:

  • Select the GET method.
  • Click the Actions button at the top.
  • Select the Deploy API option.
  • Select or create the Dev stage and press the Deploy button.

[Screenshot: 10-Deploy-Api-Dev]

Get the invoke URL from the Dev stage:

  • Select the GET method of the testredirectpage resource.
  • The invoke URL is then displayed.
  • Using this URL, we can hit the API Gateway resource and execute the method from a browser or Postman.

[Screenshot: 11-InvokeUrl-Dev-Stage]

Run this invoke URL in a browser, adding the query string pageKey=sandeep:

[Screenshot: 12-Test-ApiUrl-With-querystring-2-input]

The browser invokes the GET method of the testredirectpage resource on API Gateway, which runs our Lambda function endpoint. The Lambda function parses the query string value and responds with a redirect to https://ramanisandeep.wordpress.com, so the browser redirects to that page.

[Screenshot: 13-Test-Output-result-sandeep]

Similarly, you can test with other Query String parameters and see which web page opens 🙂

I hope this is useful. 

Important Facts of Asynchronous Programming in .NET

Modern apps make extensive use of file and networking I/O. I/O APIs traditionally block by default, resulting in poor user experiences and poor hardware utilization, unless you learn and use challenging patterns.

Task-based async APIs and the language-level asynchronous programming model invert this model, making async execution the default with few new concepts to learn.

Async code has the following characteristics:

  • Servers can handle more requests, because threads are yielded back to serve other requests while waiting for I/O to complete.
  • UIs are more responsive, because threads are yielded to UI interaction while waiting for I/O requests, and long-running work can be moved to other CPU cores.
  • Many of the newer .NET APIs are asynchronous.
  • It's easy to write async code in .NET.

One of the main advantages of using asynchronous methods is with I/O-based code.

By doing an await, you can let a thread be reused to do other work while the I/O is in flight.

The biggest misunderstanding many developers have about asynchronous programming is that an asynchronous method automatically spawns a new thread; that is not the case.
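A minimal sketch of that point: the method below awaits an I/O-bound read, the calling thread is released back to the pool while the read is in flight, and no new thread is created for the I/O itself (the file path is a placeholder):

using System.IO;
using System.Threading.Tasks;

public static class AsyncIoExample
{
    public static async Task<int> CountCharactersAsync(string path)
    {
        using (var reader = new StreamReader(path))
        {
            // The thread is released while the OS completes the read;
            // execution resumes here when the I/O finishes.
            string contents = await reader.ReadToEndAsync();
            return contents.Length;
        }
    }
}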

A recent improvement Microsoft has made to asynchronous programming is "Generalized Async Return Types".

This means that you're no longer limited to using Task or Task<T> as the return type of an asynchronous method; as long as the return type satisfies the asynchronous pattern, you're good. Using the new ValueTask type, you can avoid memory allocations, which can help address performance issues.
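A minimal sketch of a generalized async return type, using ValueTask<T> (from the System.Threading.Tasks.Extensions package) to avoid a Task allocation on the cache-hit path; the cache and prices here are simplified stand-ins:

using System.Collections.Concurrent;
using System.Threading.Tasks;

public class PriceService
{
    private readonly ConcurrentDictionary<string, decimal> _cache =
        new ConcurrentDictionary<string, decimal>();

    public ValueTask<decimal> GetPriceAsync(string sku)
    {
        // Synchronous completion: no Task object is allocated.
        if (_cache.TryGetValue(sku, out var price))
            return new ValueTask<decimal>(price);

        // Asynchronous path: fall back to a real Task.
        return new ValueTask<decimal>(LoadPriceAsync(sku));
    }

    private async Task<decimal> LoadPriceAsync(string sku)
    {
        await Task.Delay(100); // simulate an I/O-bound lookup
        return _cache[sku] = 9.99m;
    }
}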

Top NuGet packages that Microsoft recommends for asynchronous programming are System.Collections.Concurrent and System.Collections.Immutable.

System.Collections.Concurrent provides collections (e.g. ConcurrentBag or ConcurrentDictionary) that a developer can safely use from multiple threads, so they don't need to implement their own locking mechanisms. System.Collections.Immutable, on the other hand, provides immutable collections, which developers can share safely because updates are only seen by the code that made the update.
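A small sketch contrasting the two packages (the numbers are illustrative): a ConcurrentDictionary handles simultaneous writers without explicit locks, while an ImmutableList "update" returns a new list and leaves the original untouched:

using System;
using System.Collections.Concurrent;
using System.Collections.Immutable;
using System.Threading.Tasks;

public static class CollectionsExample
{
    public static void Run()
    {
        // Safe concurrent updates without manual locking.
        var counts = new ConcurrentDictionary<string, int>();
        Parallel.For(0, 1000, i =>
            counts.AddOrUpdate("hits", 1, (_, current) => current + 1));

        // Immutable update: 'original' is unchanged, 'updated' is a new list.
        ImmutableList<int> original = ImmutableList.Create(1, 2, 3);
        ImmutableList<int> updated = original.Add(4);

        Console.WriteLine($"{counts["hits"]}, {original.Count}, {updated.Count}");
        // prints: 1000, 3, 4
    }
}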

Avoid void as a return type for asynchronous methods at all costs.

The only time it’s valid to do this is with an event handler. Otherwise, asynchronous methods should always return a Task type.
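A short sketch of the rule (the method names are hypothetical): exceptions thrown from an async void method cannot be caught by the caller, so reserve async void for event handlers:

using System;
using System.Threading.Tasks;

public class AsyncVoidExample
{
    // Bad: the caller cannot await this or observe its exceptions.
    public async void SaveFireAndForget() => await Task.Delay(10);

    // Good: returns a Task the caller can await and wrap in try-catch.
    public async Task SaveAsync() => await Task.Delay(10);

    // Acceptable: event handlers must match the void-returning delegate signature.
    public async void OnClick(object sender, EventArgs e) => await SaveAsync();
}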

Immutable structures are very important when working with concurrent programming.

The main advantage with immutable data types is that you can share them across multiple tasks without worrying about synchronizing their contents. By their definition, they’re immutable, so if a task decides to “change” an immutable object, it will get a new version of that object. If another task has a reference to the original immutable object, it doesn’t change.

Take advantage of C# features to write immutable structures.

This means defining fields with the readonly keyword and properties that only have getters (not even private setters). Also, C# now allows you to mark a method parameter with the in keyword, which passes it by read-only reference so the callee cannot modify it.
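A minimal sketch of such an immutable structure, using a readonly struct, get-only properties, and an in parameter (C# 7.2+):

public readonly struct Point
{
    public Point(double x, double y)
    {
        X = x;
        Y = y;
    }

    public double X { get; }   // get-only: set once in the constructor
    public double Y { get; }

    // "Changing" a Point produces a new value; the original is untouched.
    public Point WithX(double newX) => new Point(newX, Y);
}

public static class Geometry
{
    // 'in' passes the struct by read-only reference: no copy, no mutation allowed.
    public static double LengthSquared(in Point p) => p.X * p.X + p.Y * p.Y;
}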

That’s all for now. I hope these facts will help you while doing Asynchronous Programming for your projects 🙂