Serverless Architecture


Serverless computing, often referred to simply as Serverless, is a prominent topic in the software architecture world. The largest cloud platforms, including Amazon, Google and Microsoft, continue to invest heavily in this type of architecture. Serverless is a cloud computing code execution model in which the cloud provider takes responsibility for running the servers and managing the computing resources. Put simply, there are no physical or virtual servers to manage, because they are provisioned automatically in the cloud by the third-party vendor. Some well-known serverless offerings are AWS Lambda, Google Cloud Functions and Azure Functions.

The main difference between the traditional and serverless architecture is shown schematically in Diagram 1.

Diagram 1: Traditional Vs. Serverless

Components of the serverless application are:


Functions

Functions embody the business and application logic. They execute code in response to certain actions. Multiple functions can work together as a composite function. They may be triggered externally or self-initiated when a condition is met; this process is hidden from the user. The user writes the program code, specifies the needed dependencies and uploads it. The provider then supplies the computing environment.
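As a concrete illustration, a function in this model is usually just a handler that receives an event and returns a result, while the provider supplies everything around it. The sketch below follows the AWS Lambda Python handler convention; the event shape and return format are illustrative assumptions, not a specific provider contract.

```python
# Minimal FaaS-style handler: the provider delivers the event and a
# context object; the function contains only the business logic.
def handler(event, context=None):
    # Hypothetical event shape: {"name": "..."}
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation with a sample event. In production the cloud
# provider invokes the handler; we never manage the server it runs on.
print(handler({"name": "serverless"}))
```

After uploading such a function, scaling, provisioning and process management are entirely the provider's concern.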


Events

Events allow development to shift routine backend work from local data storage to the cloud, managing the architecture's latency and the deepening of dependencies. Event services provide functionality such as Simple Notification Service (SNS), Simple Queue Service (SQS), Kinesis for streaming data, data warehouse queries, batch data processing with Cloud Dataflow and event grids.

Here is what a serverless setup looks like.

Diagram 2: Serverless State

When we talk about Serverless, it is primarily defined by two different but overlapping areas: “Backend as a Service” (BaaS) and “Function as a Service” (FaaS). BaaS and FaaS are related in their operational attributes and are frequently used together. Let’s find out what each of these scopes encompasses.


Backend as a Service (BaaS)

This term is used to describe applications that significantly or fully incorporate third-party, cloud-hosted applications and services to manage server-side logic and state. These are typically rich client applications, such as single-page web apps or mobile apps, that use the vast ecosystem of cloud-accessible databases (e.g., Firebase), authentication services (e.g., AWS Cognito) and so on.


Function as a Service (FaaS)

This term is used to describe applications where the server-side logic is still written by the developer but, unlike in mainstream architectures, it runs in stateless compute containers that are event-triggered, ephemeral and managed by a third party. The containers specify the OS and software configuration. Regarding uptime, a container works only until its current job is executed and is then destroyed. A container can host any microservice or app, although most providers impose a time limit on a single invocation.

FaaS replaces the click-processing physical server with something that needs neither a provisioned server nor a related long-running application. The only code that needs to change when moving to FaaS is the main method, also called the startup code: handlers that managed message-listener interfaces are also changed, along with the method signature.

Deployment is very different from traditional systems. The code is uploaded to the FaaS provider, and the provider then takes care of instantiating VMs and managing processes. Horizontal scaling is completely automatic, elastic and managed by the provider: if the system needs to process a hundred requests in parallel, compute containers come into play. They execute the functions and are created and destroyed purely as runtime demand dictates. Functions in FaaS are typically triggered by event types defined by the provider, such as S3 (file/object) updates or Kinesis streams. They may also be triggered in response to inbound HTTP requests via an API gateway, rather than by events originating within the same cloud environment.
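To make the trigger model concrete, here is a sketch of a handler wired to an S3-style object-update event. The nested event structure below mirrors the general shape of S3 notifications, but the bucket and key values are invented for illustration.

```python
# Handler triggered by an S3-style "object created" event. The provider
# spins up a container, invokes this function for the event batch, and
# may destroy the container afterwards.
def on_object_created(event, context=None):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"{bucket}/{key}")
    return processed

# Sample event using the nested structure S3 notifications follow
# (values here are made up).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "img.png"}}}
    ]
}
print(on_object_created(sample_event))  # ['uploads/img.png']
```

The same handler could equally be bound to an HTTP route through an API gateway; only the event shape changes, not the programming model.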

FaaS faces restrictions when it comes to instance-bound or machine-bound state, such as data stored in variables or on local disk. Such storage is available, but there is no guarantee that it persists across multiple invocations; more strongly, no assumption should be made that state from one invocation of a function will be available to another invocation of the same function. This is why FaaS functions are often described as stateless, although that is not quite a correct technical assessment. It is more accurate to say that any state of a FaaS function that is required to be persistent must be externalized outside of the function instance. A function will typically make use of a database, a cross-application cache such as Redis, or a network file/object store to keep state across requests or to provide further input necessary to handle a request.
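The statelessness point can be demonstrated directly: local variables do not survive once a container is recycled, so durable state must live in an external store. The sketch below simulates container recycling with fresh closures and uses a plain dict as a stand-in for an external store such as Redis; all names here are illustrative.

```python
# A dict standing in for an external store (e.g. Redis): it outlives
# any single function instance.
external_store = {}

def make_container():
    """Simulates a fresh compute container: local state starts empty."""
    local_count = 0

    def handler(event, context=None):
        nonlocal local_count
        local_count += 1  # lost whenever the container is destroyed
        key = event["user"]
        external_store[key] = external_store.get(key, 0) + 1  # persists
        return {"local": local_count, "external": external_store[key]}

    return handler

instance_a = make_container()          # first container
instance_a({"user": "alice"})
instance_b = make_container()          # provider recycled the container
result = instance_b({"user": "alice"})
print(result)  # {'local': 1, 'external': 2}: local state reset, external kept
```

The local counter silently resets between instances, which is exactly the failure mode the text warns against relying on.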

The New-retail Business Framework (NBF) is an open framework developed by the supply chain infrastructure technology team for new-retail services. A serverless system like FaaS helps NBF achieve industry-standard practices, load balancing and long-term sustainability. Diagram 3 shows how NBF works in conjunction with FaaS.

Diagram 3: NBF-FaaS Architecture

Event Sources form an event-driven set. Function Instances provide the functions or microservices. The FaaS Controller manages function control services such as the API Gateway or the Backend for Frontend (BFF). Platform Service Functions rely on platform services such as privilege-management APIs and OSS. The Cloud Service Engine (CSE) provides dynamic scaling capabilities that rapidly adapt to different resource needs during on- and off-peak hours; CSE also optimizes container cold- and hot-start performance and supplies serverless O&M tools such as logging, monitoring and link tracing. NBF containers use the Open Service Gateway Initiative (OSGi) architecture and provide complete bundle lifecycle management, including load, start, unmount and delete, as well as isolation and communication. Taken together, these platform capabilities yield service release, which allows a function to be released quickly or bundled as a service; service routing, which includes polymorphism, degradation and mock routing; service management, with service version control and service start/stop; and lastly service O&M for serverless services, covering hybrid deployment, phased release and disaster recovery.
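The controller role described above can be reduced to a small dispatch loop: event sources emit typed events, and the controller routes each event to a registered function instance. The sketch below is a toy model of that routing; the event types and function names are invented for illustration and do not correspond to any real NBF API.

```python
# Toy FaaS controller: maps event types to registered functions,
# mimicking the Event Sources -> FaaS Controller -> Function Instances
# flow from the architecture description.
registry = {}

def register(event_type):
    """Decorator registering a function instance for an event type."""
    def decorator(fn):
        registry[event_type] = fn
        return fn
    return decorator

@register("order.created")
def handle_order(event):
    return f"processing order {event['id']}"

@register("stock.low")
def handle_stock(event):
    return f"reordering sku {event['sku']}"

def controller(event):
    # Route by event type, much like an API gateway / BFF front door.
    fn = registry.get(event["type"])
    if fn is None:
        return "no function registered"
    return fn(event)

print(controller({"type": "order.created", "id": 42}))
```

In a real deployment the registry, scaling and instance lifecycle would all be handled by the platform (CSE in NBF's case), not by application code.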

Lastly, an important concept to grasp is bundle communication. Bundles are able to utilize container capabilities such as Spring context hosting and AOP by using the import mechanism provided by the containers. For example:

       <packages>
           <package>org.aspectj</package>
       </packages>

The NBF containers create an independent sandbox for each bundle, which ensures code-level bundle isolation from load time onward and thus prevents class and resource conflicts among bundles. They manage bundle context globally and provide plug-in hosting. This sums up how FaaS works and how its pieces fit together.
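The sandbox idea can be sketched in miniature: if each bundle's code is loaded into its own namespace, identically named classes or resources in different bundles never collide. The toy below uses per-bundle Python namespaces as a stand-in for the per-bundle class loaders an OSGi-style container provides; the bundle sources are invented for illustration.

```python
# Toy bundle sandbox: each bundle executes in its own namespace, so
# same-named definitions never conflict across bundles.
def load_bundle(source):
    sandbox = {}  # stand-in for a per-bundle class loader / context
    exec(source, sandbox)
    return sandbox

bundle_a = load_bundle("VERSION = '1.0'\ndef greet(): return 'bundle A'")
bundle_b = load_bundle("VERSION = '2.0'\ndef greet(): return 'bundle B'")

# Same names, no conflict: each bundle sees only its own definitions.
print(bundle_a["greet"](), bundle_a["VERSION"])  # bundle A 1.0
print(bundle_b["greet"](), bundle_b["VERSION"])  # bundle B 2.0
```

A real OSGi container adds much more on top of this (lifecycle, inter-bundle imports, service registry), but the isolation principle is the same: resolution happens against the bundle's own sandbox, not a shared global namespace.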