Our responsibility as technical consultants at InterWorks is to give our clients the best technical solution to their problem, and also to make sure that solution saves them money. A good solution lets our clients spend less and get more value.
Azure Functions and Durable Functions, the core of serverless computing in Azure, are a good fit for us and our clients. They let us deploy our code without worrying about infrastructure: we don't define how many servers are needed to execute our function, because the platform takes care of that. With the Consumption pricing model we only pay for the time our function is actually running.
Mark Mason said, "Building smart processes to streamline the workflow can make the work easier and the results more reliable, which keeps my head above water and my clients happy."
So, Durable Functions offer us two quite important things: paying less and expressing the workflow of our code.
Azure Functions is known as Function as a Service (FaaS). It allows deploying individual functions: small pieces of code that respond to various types of events.
An Azure Function has a trigger that executes its piece of code. There are several types of triggers: a timer (e.g. run a function every hour), a queue message (run the function every time a message appears on a particular queue) and an HTTP request (run a function every time a particular HTTP endpoint is called).
An Azure Function can have many different input and output bindings, so it can be integrated with many other services: Blob Storage, Cosmos DB, SendGrid (sending e-mails or text messages) and so on.
As I mentioned before, an Azure Function is just an independent piece of code that is deployed and triggered. Understanding a workflow from individual Azure Functions alone is quite difficult; we have to read design documents or other files to piece it together. Azure added an extension of Azure Functions known as Durable Functions, which captures the workflow in code and makes our life easier.
What are Durable Functions and their key concepts?
Durable Functions give us a chance to write stateful functions in a serverless environment. When we use Durable Functions we create three types of functions: a starter function, an orchestrator function (optionally with sub-orchestrators) and activity functions.
The starter function can be, for example, queue triggered. It calls the orchestrator function, which defines the workflow. The orchestrator function doesn't perform any actions itself, such as calling APIs or writing to a database; instead, it delegates all of the action steps in the workflow to activity functions (regular Azure Functions). When a new workflow is initiated, the orchestrator function is called and triggers the first activity function, then goes to sleep. When the first activity finishes, the orchestrator wakes up and carries on from where it left off, calling the next activity in the workflow. An activity function can receive input data from the orchestrator and can return data to it.
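This sleep-and-wake pattern can be sketched in plain Python as a toy model. This is not the Azure SDK, and all the function names here are made up for illustration: the orchestrator is a generator that yields activity calls, and a driver runs each activity and sends its result back in.

```python
# Toy model of the orchestrator/activity split (plain Python, not the
# Azure SDK; all names here are illustrative).

def get_order(order_id):          # activity: fetch input data
    return {"id": order_id, "amount": 100}

def charge_payment(order):        # activity: perform an action
    return f"charged {order['amount']} for order {order['id']}"

def orchestrator(order_id):
    """Defines the workflow; delegates all real work to activities."""
    order = yield (get_order, order_id)       # first activity
    receipt = yield (charge_payment, order)   # next activity
    return receipt

def run(workflow):
    """Driver: wakes the orchestrator with each activity's result."""
    result = None
    try:
        while True:
            activity, arg = workflow.send(result)  # orchestrator "sleeps" here
            result = activity(arg)                 # run the activity
    except StopIteration as done:
        return done.value

receipt = run(orchestrator("A-42"))
print(receipt)
```

The real Durable Functions runtime adds durability on top of this idea: state is checkpointed, so the orchestrator can sleep for hours or days between activities.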
With Durable Functions we have all the logic in one place: we know which step follows which, and we can see the big picture at a glance!
What will happen if we get an error? How should we handle it?
Handling errors is important if we want to write good code. There are two approaches to handling errors in Durable Functions.
- In the activity function we can wrap the work in a try-catch block; if an exception is thrown, the activity returns an object that indicates failure. The orchestrator function then checks the success flag in the returned object: if it is true, it continues with the workflow, otherwise it abandons the workflow.
- The activity function throws an unhandled exception. In this approach we put exception handlers in the orchestrator function, where we catch exceptions thrown by any of the activities and deal with them accordingly. We can even call another activity inside the orchestrator's exception handler, for example to revert the work done by a previous activity function.
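Both approaches can be sketched side by side in plain Python. Again, this is an illustrative toy, not the Azure SDK, and the trip-booking activities are invented for the example:

```python
# Sketch of both error-handling approaches (plain Python; all names
# are illustrative, not part of any real SDK).

# Approach 1: the activity catches its own exception and returns a
# result object with a success flag.
def book_flight(trip):
    try:
        if trip.get("destination") is None:
            raise ValueError("no destination")
        return {"success": True, "booking": f"flight to {trip['destination']}"}
    except ValueError as err:
        return {"success": False, "error": str(err)}

# Approach 2: the activity throws, and the orchestrator catches it.
def charge_card(trip):
    raise RuntimeError("payment gateway unavailable")

def cancel_flight(trip):
    return "flight cancelled"          # compensating activity

def orchestrate(trip):
    result = book_flight(trip)
    if not result["success"]:          # approach 1: check the flag
        return f"abandoned: {result['error']}"
    try:
        charge_card(trip)              # approach 2: may raise
    except RuntimeError:
        # Call another activity to revert what was already done.
        return cancel_flight(trip)
    return "trip booked"

outcome_ok = orchestrate({"destination": "Oslo"})
outcome_bad = orchestrate({})
print(outcome_ok)    # charge fails, so the flight is reverted
print(outcome_bad)   # booking fails, so the workflow is abandoned
```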
What will happen if an activity function fails, but the action must be completed within a short period of time? Durable Functions have an easy solution for that: a retry mechanism.
In the workflow we can retry a step if the activity fails because of a transient issue. There is a special method for this purpose called "CallActivityWithRetryAsync". It is quite powerful, because it allows us to specify how many times to retry and at what interval.
There is also the possibility of writing a custom function that applies the retry mechanism only when we catch a transient exception. This is quite important in a cloud-based environment, because we want to distinguish between transient errors (such as network connectivity issues or timeouts), which may go away if we retry, and permanent errors, which never go away, so retrying them just loops forever.
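A minimal sketch of such a custom retry helper, in plain Python rather than `CallActivityWithRetryAsync` (the class name, attempt count and interval are illustrative assumptions):

```python
import time

# Sketch of a custom retry helper (plain Python, not the Durable
# Functions API; names and numbers are illustrative).

class TransientError(Exception):
    """Errors that may go away on retry (network blips, timeouts)."""

def call_with_retry(activity, arg, max_attempts=3, interval=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return activity(arg)
        except TransientError:
            if attempt == max_attempts:
                raise                    # transient, but retries exhausted
            time.sleep(interval)         # wait before the next attempt
        # Permanent errors are deliberately not caught here:
        # retrying them would never succeed.

attempts = []

def flaky(x):
    """Fails twice with a transient error, then succeeds."""
    attempts.append(x)
    if len(attempts) < 3:
        raise TransientError("timeout")
    return x * 2

result = call_with_retry(flaky, 21)
print(result)   # succeeds on the third attempt
```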
Fan-out/fan-in feature
Time is money, and we don't want to lose money! Is there a way to save even more time, when we already have serverless? There is.
Durable Functions make it easy to implement a powerful workflow pattern known as the fan-out/fan-in pattern. Sometimes several functions in a workflow can run in parallel: they are all triggered at once, and when they have all finished, the rest of the workflow continues. We choose to execute such functions in parallel instead of one by one because sequential execution would take longer.
Fan-out starts all the parallel functions (without Durable Functions this can be done with queue-triggered Azure Functions).
Fan-in waits until the last parallel task is done and its result is stored; only then is the next action in the workflow triggered (without Durable Functions this is much harder to implement).
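The shape of the pattern can be sketched with plain `asyncio` rather than the Azure SDK (the image-resizing activity is an invented example):

```python
import asyncio

# Sketch of fan-out/fan-in with asyncio (plain Python, not the
# Azure SDK; names are illustrative).

async def resize_image(name):
    await asyncio.sleep(0.01)            # stand-in for real work
    return f"{name}-resized"

async def workflow(images):
    # Fan-out: start all parallel activities at once.
    tasks = [resize_image(name) for name in images]
    # Fan-in: wait for the last one to finish, then continue.
    results = await asyncio.gather(*tasks)
    return f"published {len(results)} images"

outcome = asyncio.run(workflow(["a.png", "b.png", "c.png"]))
print(outcome)
```

In Durable Functions the same shape is expressed by starting several activity calls from the orchestrator and then awaiting them all before the next step.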
Azure Functions let us run pieces of code on demand, and thanks to that we can save a lot of money. Durable Functions, an extension of Azure Functions, give us a chance to define our workflows in code. Moreover, they let us see the big picture of what the whole workflow does, instead of looking across multiple functions to understand it. Retries are available for activity functions and sub-orchestrations, and error handling is possible for the workflow as a whole, for activity functions and for sub-orchestrations. The fan-out/fan-in pattern opens up another possibility: parallel execution. Finally, sub-orchestrations make the code easier to read and upgrades more reliable.