
What is Serverless Computing?

Serverless computing is a form of cloud computing where resources are provisioned on-demand and in exact units based on precise usage, unlike traditional cloud computing which allocates chunks of resources that are consumed or left unused.

For billing, this entails a precise, usage-based payment model: if a serverless application requests 3.74GB of RAM, then that is exactly what is billed. Resources are drawn dynamically from a massive shared pool that serves many tenants, which keeps utilization high.

Currently, serverless is the topmost layer of the cloud computing stack, sitting above the three other major cloud computing models: SaaS, PaaS, and IaaS. Like those underlying layers, each of which provides a managed resource, serverless cloud providers offer a layer that abstracts servers away from DevOps teams. Because cloud providers fully manage a gamut of responsibilities including provisioning, scheduling, scaling, patching, and security, developers using serverless can focus solely on their applications, free of servers. In fact, this is where “serverless” gets its name: developers are emancipated from managing back-end operations.

Serverless is often associated with microservices and Function-as-a-Service (FaaS). Generally, this association encompasses the three serverless characteristics below, which together support a serverless environment. In practice, this may look like a platform such as WordPress managing a website’s backend while providing a separate content management dashboard for users.

Technically, the three main characteristics below define serverless environments and set them apart from other cloud computing models (scaling, a fourth common benefit, is shared across the cloud).

  • Stateless environment — Stateless servers retain no information. Any data to be saved must be sent to a database or storage device.
  • Event-driven compute containers — The application is triggered by events, or change in the system. Upon activation, a container spools up to the exact function to execute the event request.
  • Ephemeral runtime — Event-driven compute containers run only for as long as they are needed, then disappear.
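The three characteristics above can be sketched with a small, local simulation. This is an illustrative stand-in, not a real provider API: the "container" is just a function scope that is created per event (event-driven), keeps nothing between calls (stateless), and is discarded as soon as the handler returns (ephemeral).

```python
# DATABASE stands in for external storage; the handler itself keeps no state.
DATABASE = {}

def handle_event(event):
    """Spools up for one event, processes it, and disappears with its locals."""
    order_id = event["order_id"]
    total = sum(item["price"] * item["qty"] for item in event["items"])
    DATABASE[order_id] = total  # state must be pushed out, not kept in the handler
    return total

# Each event triggers an independent invocation.
print(handle_event({"order_id": "A1", "items": [{"price": 2.5, "qty": 4}]}))  # 10.0
print(DATABASE)  # {'A1': 10.0}
```

Any value the handler does not write to `DATABASE` is lost when it returns, which is exactly the stateless constraint described above.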

Serverless computing is also considered a cloud-native development model, meaning apps are developed using the technologies native to cloud environments, and eliminating the use of on-premise servers, hence serverless. This requires a different programming paradigm for developers than traditional methods like those used to develop monolithic programs.

Serverless computing pros and cons

Each cloud model overlaps in capabilities: an accounting app can be built atop a PaaS system, sourced as a SaaS solution, or, likewise, built in a serverless environment. Serverless has many pros that enable teams to operate with more agility, transparency, and cost control.

  • Returns Time to Developers — Emancipates time for developers by abstracting back-end server management away from application development.
  • Cost Controls — Consumers are charged only for executions, compared to other payment models that may charge per virtual machine.
  • Multi-Language Support — Serverless platforms can support multiple programming languages and are especially well suited to event-driven languages. Newer developments, like Google Cloud Run, allow any language that can run within a container, and AWS Lambda Layers makes allowances for bringing code written in other languages into a code base.
  • Reduction of Complexity — Serverless, by its nature, reduces responsibilities in DevOps cycles.
  • Transparent Usage — Serverless provides transparency into system usage. Typically, a dashboard offers a unified look at usage of all applications and services deployed across the organization.
Serverless also comes with tradeoffs to weigh:

  • Not Cost Effective in All Situations — Serverless provides significant cost controls, namely charging only for usage, which is beneficial during peak times. However, predictable workloads, say long-running processes with well-understood traffic, may be better served by traditional server environments.
  • Cold Starts — Serverless architectures do not use long-running processes, instead they offer resource provisions on-demand. This means that sometimes resources will start cold in order to respond to a request. For most systems this latency may not be a problem, yet for time sensitive applications this delay may be unacceptable.
  • Monitoring and Debugging — Monitoring and debugging require a shift in thinking in a serverless ecosystem because serverless architectures, especially those using microservices, present different operational challenges.
  • Vendor Lock-in — As with any cloud provider, vendor lock-in is a concern. Serverless providers give consumers an ecosystem of functionality, however, this requires deeper and deeper integrations to create more value for the application. As apps become more integrated they run the risk of deeper lock-in.
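The cost-control tradeoff above comes down to arithmetic: per-execution billing wins for spiky traffic, while an always-on server can win for steady, long-running load. A back-of-the-envelope sketch, using hypothetical placeholder prices rather than any provider's actual rates:

```python
def serverless_cost(invocations, seconds_per_run, gb_ram, price_per_gb_second):
    """Pay only for what each execution actually uses."""
    return invocations * seconds_per_run * gb_ram * price_per_gb_second

def vm_cost(hours, price_per_hour):
    """Pay for the virtual machine whether it is busy or idle."""
    return hours * price_per_hour

# Spiky workload: 100,000 short requests a month favors per-execution billing.
spiky = serverless_cost(100_000, 0.2, 0.5, 0.0000167)   # ~0.17
always_on = vm_cost(730, 0.05)                          # 36.5 for a month
print(spiky < always_on)  # True

# Steady workload: tens of millions of invocation-seconds can flip the result.
steady = serverless_cost(50_000_000, 1.0, 1.0, 0.0000167)  # ~835
print(steady > always_on)  # True
```

The crossover point depends entirely on invocation volume, duration, and memory, which is why well-understood traffic deserves this calculation before choosing a model.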

Serverless cloud architecture

Serverless cloud architectures are design patterns that break down business logic into functional units according to some abstraction paradigm intent on freeing up server management. Three common paradigms are offered by leading cloud providers like Amazon and Google.

  • Function-as-a-Service (FaaS) — The Function-as-a-Service model sits between Platform-as-a-Service and Software-as-a-Service: it’s not a bare-bones development platform, and it’s not a full-fledged software package.

    In FaaS, developers have access to ready-to-implement frameworks of functionality. Application development centers on the idea that when a request is made, a container with exactly the necessary function and resources is invoked, then cleaned up when no longer needed. Because of this, FaaS and serverless are often used interchangeably, but serverless refers to a third-party-managed cloud environment with form-fitting provisioning, while FaaS refers to the event-driven architecture itself.

  • Mobile Backend-as-a-Service (mBaaS) — mBaaS, also called simply Backend-as-a-Service (BaaS), provides APIs for mobile developers to link to cloud services, such as cloud storage, user authentication, and push notifications. BaaS uses FaaS concepts, such as function-fitting containers, to support a framework specific to cloud services.
  • Serverless Database — Databases can be abstracted using FaaS concepts, helping to eliminate the operational overhead of deploying and managing databases. They automatically scale database compute and storage resources based on real-time demand.
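The FaaS lifecycle described above — a container holding exactly one function is created per request, invoked, and then torn down — can be sketched locally. `FunctionContainer` here is an illustrative stand-in, not a real provider API:

```python
class FunctionContainer:
    """Simulates a form-fitting container that exists only for one request."""
    def __init__(self, fn):
        self.fn = fn
        self.alive = False

    def __enter__(self):          # container spools up on demand
        self.alive = True
        return self

    def invoke(self, event):
        return self.fn(event)

    def __exit__(self, *exc):     # ...and is cleaned up after the request
        self.alive = False

def thumbnail_size(event):        # the single function the container holds
    w, h = event["width"], event["height"]
    scale = 128 / max(w, h)
    return (round(w * scale), round(h * scale))

with FunctionContainer(thumbnail_size) as container:
    print(container.invoke({"width": 1920, "height": 1080}))  # (128, 72)
print(container.alive)  # False: nothing outlives the request
```

The context manager makes the ephemeral guarantee explicit: once the request completes, the container and everything inside it is gone.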

Serverless vs. microservices

Microservices architecture refers to a design pattern where applications are broken down into smaller services, or microservices. The combination and intercommunication of these microservices constitutes the whole application.

There are many good reasons to choose microservices. First, the alternative design pattern, the monolith, places all of an application’s code in one place; as a monolith grows it can become unwieldy, and development teams are challenged to maintain the code base. Monoliths are also not ideal for cloud-native applications, which are designed to responsively and rapidly expand and contract services to meet demand.

Second, microservices are ideal for the containerized operations of cloud architectures. Containers are smaller than virtual machine runtimes and require less of both resources and application overhead, making them well suited to holding microservices. They can come into existence when more of the same service is needed, and disappear when they are not, saving resources.

Continuing the idea of wrapping runtime environments around services (effectively containerizing them), a container can be shrunk further to encompass and run just a single function. When the function is invoked, a container comes up, runs the function, and then closes. These are called stateless containers, and they support the serverless event-driven model. This is in contrast to stateful containers, which spin up and remain for a longer duration, and so do not adhere to the ephemeral characteristic of serverless.

Despite this association, serverless and microservices are related mainly because both live within cloud ecosystems. The distinction is sharp: serverless offers developers a way to outsource backend responsibilities, while microservices is a development design approach.

Serverless computing use cases

Serverless computing requires a cloud-native approach to developing applications. Serverless applications are decoupled, stateless, and contain the least amount of code necessary. As such, nearly any use case that needs to leverage the power of cloud technologies is open for development using serverless.

The short list below demonstrates the variety of use cases that serverless features enable.

  • Multimedia Processing
  • Database Interfacing
  • IoT Sensor Messaging
  • Scaling Streaming Processes
  • Multiplying Chat Bots
  • Batch Jobs / Batch Scheduling
  • Mobile Backend-as-a-Service
  • Serverless Database
  • Continuous Integration/Continuous Delivery (CI/CD) pipelines
  • Webapps
  • Multi Language Applications

Kubernetes for serverless environments

Because serverless environments utilize containers, Kubernetes is a common choice for running them. Kubernetes is not, however, ready out of the box to run serverless workloads. Instead, Knative, an open-source project, can be used to deploy code to a Kubernetes environment.

According to Red Hat, Knative consists of 3 primary components:

  • Build - A flexible approach to building source code into containers.
  • Serving - Enables rapid deployment and automatic scaling of containers through a request-driven model for serving workloads based on demand.
  • Eventing - An infrastructure for consuming and producing events to stimulate applications. Applications can be triggered by a variety of sources, such as events from your own applications, cloud services from multiple providers, Software-as-a-Service (SaaS) systems, and Red Hat AMQ streams.
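To make the Serving component concrete, a minimal Knative Service manifest looks roughly like the sketch below; the service name and environment variable are illustrative, and the image is one of Knative’s published samples:

```yaml
apiVersion: serving.knative.dev/v1   # Knative Serving API
kind: Service
metadata:
  name: hello                        # illustrative service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # Knative sample image
          env:
            - name: TARGET
              value: "Serverless"
```

Applying a manifest like this asks Knative Serving to deploy the container, route requests to it, and scale it with demand, including down to zero when idle.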

Knative evolved beyond earlier serverless frameworks by allowing the deployment of any workload: monoliths, microservices, or functions.
