What are Cloud Native Applications? | Hitachi Vantara

What are cloud-native applications?

Cloud-native applications are applications designed and developed with the advantages of cloud technologies in mind, rather than built with the traditional approaches that were standard before cloud computing became established. These applications use cloud capabilities to enable rapid development, optimization, and interconnectivity, meaning companies can bring products to market faster, discover insights more easily, solicit feedback from users, and then roll out improvements rapidly.

To aid this flexible development process, cloud-native apps tend to be built in small pieces, utilizing microservices to complete tasks while remaining lightweight themselves. Cloud-native apps are different from cloud-based apps in that they are built specifically to fully use cloud features rather than just a few cloud services or resources.

Comparatively, legacy applications built with traditional software development methods tend to be monolithic and opaque and run on a single system. Often these systems are “snowflakes,” with hardware infrastructure growing in tandem with the enterprise application into a unique, tightly coupled configuration. These legacy apps eventually run into the challenge of obsolescence, at which point organizations must confront the need to migrate their on-premises apps to the cloud.

Cloud-native applications rely on DevOps, microservices, containers, and continuous integration/continuous delivery (CI/CD) methods to redefine how developers achieve business goals, shifting from traditional on-premises infrastructures to cloud environments. For many reasons, including cost, security, flexibility, and innovation, cloud-native apps are the future for business-critical applications.

What is cloud-native?

While cloud-native apps leverage cloud technologies to deliver rapid value, the term ‘cloud native’ refers to the ideology around developing and running applications within a dynamic distributed computing environment, like a private cloud, public cloud, or hybrid cloud.

The overarching design approach in cloud-native is to support applications through a CI/CD-style process. Microservices, containers, APIs, and dynamic orchestration are the key cloud capabilities that enable continuous development. As technology advances, new, flexible design principles will continue to emerge; today, the principles that guide developers building cloud-native applications are:

  • Single Concern Principle — Derived from the “single responsibility” principle, which dictates that classes should have only one responsibility. In cloud-native, the Single Concern Principle says that every container should have but one concern and perform it as best it can.
  • High Observability Principle — The High Observability Principle is akin to the “black box principle,” which says an algorithm’s inner workings should not be exposed. In cloud-native, APIs are a prerequisite that make containers highly observable. This requirement is necessary so the system can monitor the state of the containerized app while its insides remain hidden.
  • Lifecycle Conformance Principle — Cloud-native apps come and go rapidly within the cloud environment as resources scale up and down. A container is created to serve a purpose for a short time and then is destroyed. For this reason, applications need to listen for system events from the managing platform, like a container kill command, so that they can react to them appropriately and clean up after themselves.
  • Image Immutability Principle — To enable rapid scaling, container images must be immutable. If the containerized application undergoes an update or patch, then a new, updated container image must replace the old one across all environments. In short, run the same container image in every environment rather than modifying containers in place.
  • Process Disposability Principle — Process disposability states that containerized applications are temporary and need to be intentionally designed to be quickly disposed of when their life cycle comes to an end.
  • Self-containment Principle — Related to the Single Concern Principle, the Self-containment Principle dictates that containers must have all the software they need, and no more, to run at build time. This is easily accomplished by including the necessary libraries and dependencies in the container image.
  • Runtime Confinement Principle — Related to the High Observability Principle, Runtime Confinement tells developers to have their containers inform the platform of their resource requirements. This is because it is the platform that manages resources, and it does this through the monitoring of container runtime profiles.
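The Lifecycle Conformance and Process Disposability principles above can be sketched in Python: the app registers a handler for SIGTERM, the signal orchestrators such as Kubernetes and Docker send before killing a container, and shuts down cleanly when it arrives. This is a minimal sketch on POSIX systems; the self-sent signal merely simulates the managing platform, and the cleanup work is a placeholder.

```python
import os
import signal
import time

shutdown_requested = False

def handle_sigterm(signum, frame):
    """React to the platform's kill command instead of being terminated abruptly."""
    global shutdown_requested
    shutdown_requested = True

# Register the handler so the container can clean up after itself.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the managing platform sending SIGTERM (POSIX only).
os.kill(os.getpid(), signal.SIGTERM)

# The main loop notices the request and exits promptly.
while not shutdown_requested:
    time.sleep(0.1)

# Placeholder cleanup: a real service would close connections and flush buffers.
print("cleanup done")
```

Reacting to the kill command within the platform's grace period is what lets containers be disposed of and replaced without losing in-flight work.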

What is cloud-native application architecture?

Cloud-native application architecture encompasses several technologies that have shifted how engineers address the development process, application architecture, deployment & packaging, and application infrastructure. It is the coordination of these technologies and how apps use their services that define the cloud-native architecture category. Several of those enabling technologies include:

  • Containers — Containers are akin to virtual machines, but are much smaller and intended to be created and destroyed frequently as needed. In contrast, VMs must contain an entire guest operating system, while containers, which silo applications, need only the application components. Containers consume fewer compute and cloud storage resources and are useful mechanisms that feature isolation, portability, and easy deployment.
  • Microservices — Rather than include every feature within a single software package, the microservices approach decomposes an application into several smaller services that can be composed to complete larger tasks. Microservices are independent and can be deployed, upgraded, and scaled on their own as needed. Microservice thinking aims to deliver just the core functionality developers need and keep the overall application light.
  • APIs — Microservices cannot be discussed without Application Programming Interfaces. APIs standardize communication between microservices with protocols like Representational State Transfer (REST). In today’s cloud paradigm, which calls for isolating services and apps, APIs are the essential communication channel between them.
  • Automation — Automating mundane tasks, or even time-sensitive critical tasks, is another game-changing feature of cloud-native architectures. Automation has many uses, but it has undeniably made cloud scaling possible. For example, without automation, servers could not automatically increase bandwidth or add compute resources to absorb an overflow of traffic.
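The microservice-plus-API pattern above can be sketched with Python's standard library alone: a single-concern service exposes a REST-style endpoint that another service, or an orchestrator's liveness probe, can consume. The `/health` endpoint and response shape are illustrative assumptions, not any particular platform's convention.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A hypothetical single-concern microservice: its only job is to report health
# over a small REST-style API.
class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service (or a monitoring probe) consumes the API over HTTP.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/health") as resp:
    health = json.loads(resp.read().decode())
server.shutdown()
print(health)  # {'status': 'ok'}
```

Because the two sides share only the HTTP/JSON contract, either one can be rewritten, scaled, or redeployed independently, which is the point of the architecture.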

Cloud-native benefits

Cloud-native applications inherit many of their benefits from the flexible architecture underlying the cloud, which abstracts away many challenges inherent in older technology architectures. Some of those benefits include:

  • Application Portability — Cloud-native apps are meant to run in containers for the express purpose of ramping up and down as demand changes. Because these apps are built to run in containers, they are also vendor agnostic: porting your applications to another cloud provider is usually easy and convenient.
  • Application Visibility — The isolated characteristic of microservices makes them easier to analyze, debug, update, and work with altogether. And since microservices are lightweight, the resulting interactions between other microservices are easier to understand, as compared with heavier code that becomes opaque and dense with dependencies.
  • Automation Management — Automation has become integral in more of the software development lifecycle. With CI/CD approaches, automation is a must, allowing developers to perform many repetitive tasks quickly, like testing code before deployment.
  • Cost-effectiveness — Cloud service providers (CSPs) have made their services accessible from a cost perspective for everyone from small businesses to enterprises. Because infrastructure resources can be divided up on demand, cloud providers can provision compute and cloud storage capacity à la carte and offer subscription services with predictable cost increases as more resources are used. For businesses, this makes forecasting IT expenses manageable.
  • Infrastructure Reliability — By design, cloud systems are built to be reliable and resilient. Resource redundancy is built into cloud services deployed across thousands of servers and multiple physical locations.

Cloud-native applications vs. traditional applications

  • Management — Cloud-native: Subscription services help predict IT expenses with exceptional accuracy, and IT resources are managed by the CSP, freeing up a company’s staff. Traditional: The company bears complete responsibility for the infrastructure and application, including preventing downtime and ensuring security, but gains full customizability in return.
  • Operating System — Cloud-native: Cloud providers handle all operating system maintenance; users just need to choose the right environments for their business. Traditional: Maintaining the OS environment is the responsibility of the enterprise; however, this grants the opportunity to build tight dependencies between applications, the OS, and hardware.
  • Resource Capacity — Cloud-native: Resource capacity is dynamically and automatically allocated as demand fluctuates. Traditional: Resource capacity must be added manually, so many teams over-provision to insure against overloads, and the idle capacity becomes a cost center.
  • DevOps — Cloud-native: DevOps and CI/CD approaches give cloud-native app developers collaborative capabilities and immediate controls to fix or update apps; the ability to rapidly deploy software and fixes gives enterprises the agility to push their business goals. Traditional: Developing for on-premises enterprise infrastructure can leave the organization reliant on aging development approaches ill-suited to cloud development, such as waterfall, which forces teams to release large updates periodically, risking delays and missed opportunities.
  • Architecture — Cloud-native: APIs and microservices have produced a new architecture that supports cloud-native apps better and faster. Independent microservices have also reshaped how development teams work: separate teams can be tasked with a particular microservice, and that service can be maintained independently. Traditional: Enterprise architectures are characterized as monolithic because they include everything; these bulky, hard-to-manage apps present many challenges, including dependency issues, data silos, security gaps, and incompatibility with future technologies.
  • Scalability — Cloud-native: Cloud architectures aim to automate all of their systems and are ideal for automatically scaling workloads; for many, this is the chief value proposition. Traditional: These solutions typically do not feature automated scaling, and home-grown automation can cause more harm than good if developed poorly.
  • Disaster Recovery — Cloud-native: Containers allow for rapid recovery because of their dynamic nature; moving, scaling, or restarting them is quick. Traditional: Enterprises running their own systems may not adhere to a container architecture, using virtual machines instead. VMs require much more overhead than containers and cannot be spun up as quickly, though they are still much preferred to a full system reset.
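The scalability contrast above can be made concrete with a toy autoscaling rule of the sort cloud platforms apply automatically: hold utilization near a target by recomputing the replica count from observed load. The 60% target and the policy itself are illustrative assumptions, loosely in the spirit of horizontal autoscalers, not any provider's actual algorithm.

```python
import math

def desired_replicas(current_replicas, current_utilization, target_utilization=0.6):
    """Scale the number of service replicas proportionally to observed load.

    Hypothetical policy: if instances run hotter than the target, add capacity;
    if they run cooler, shed it, never dropping below one replica.
    """
    if current_utilization <= 0:
        return 1
    return max(1, math.ceil(current_replicas * current_utilization / target_utilization))

print(desired_replicas(4, 0.9))   # traffic spike: 4 hot replicas -> 6
print(desired_replicas(4, 0.15))  # idle period: 4 cool replicas -> 1
```

A traditional deployment would instead provision for the spike permanently; the idle difference between 6 and 1 replicas is exactly the cost center the table describes.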

How to build cloud-native applications?

DevOps provides many of the founding principles and best practices for building cloud-native apps. These include:

  • Agile Project Management — Agile project management is a flexible, iterative approach to designing and developing software. Agile dovetails nicely with cloud technologies, as the development style supports the rapid deployments the cloud enables.
  • Continuous Integration/Continuous Delivery (CI/CD) — CI/CD is a deployment approach that uses automation to quickly test and deploy new apps, updates, and patches. Continuous capabilities have reduced deployment time from weeks to hours.
  • Monitor DevOps Pipeline — Automation is a game-changer, but to prevent errors within the automated pipeline, someone must oversee it. A human overseer will always be a valuable best practice.
  • Observability — Observability describes the movement towards greater application situational awareness. By compiling logs, traces, and metrics, teams can determine how their application is running, and make predictions about how it will function in other systems.
  • Continuous Feedback — The theme of CI/CD is a feedback mechanism that continuously informs development. Cloud-native applications are always undergoing fine-tuning; by accepting this cycle and listening to feedback, teams can get ahead of problems.
  • Design For Failure — Accepting that there will be failures is a tenet of Agile thinking. Cloud-native environments are designed for rapid and continuous development, which means designing for perfection can counterintuitively cause major delays. Rather, fail quickly, fail often, and recover faster.
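The Observability practice above depends on the application emitting output a pipeline can compile. A minimal sketch, assuming the team's log aggregator ingests JSON-structured logs (the "checkout" service name is a made-up example):

```python
import json
import logging
import sys

# Emitting logs as JSON rather than free-form text lets an observability
# pipeline parse them into searchable fields, metrics, and traces.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "ts": record.created,          # epoch timestamp of the event
            "level": record.levelname,     # severity for filtering/alerting
            "msg": record.getMessage(),    # the human-readable message
            "service": "checkout",         # hypothetical service name
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("cloud-native-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")
```

Each line a container writes to stdout in this form can be collected by the platform, correlated across microservices, and turned into the logs, traces, and metrics the practice calls for.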

Cloud-native security

Security in the cloud is an important concern, especially as many point out that the growth of cloud and IoT has significantly expanded the Internet, and with it the threat surface for cyber-attacks and network intrusions. Because of the nature of the cloud, security concerns and responsibilities have shifted. The “Fortress” model was the traditional security standard for many enterprises: enterprise assets were protected inside the fortress, behind company firewalls. Today, because enterprise traffic traverses the public Internet, security has been reconfigured, and new technologies have slowly augmented fortress strategies or replaced them altogether.

  • Identity and Access Management — IAM is a collection of policies and technologies that give businesses the tools to manage identification and access controls for users and devices that access their resources. Closely associated with IAM practices is the Zero Trust principle, which states that no device attempting access should be trusted by default, and every attempt should be verified. Sometimes referred to as perimeterless security, the opposite of the fortress strategy, a Zero Trust approach decouples access from location and demands verification from all devices and users, from anywhere, even when accessing from the company’s privately managed network.
  • Multi-factor Authentication — Multi-factor authentication verifies users against multiple identifying factors to ensure the person is who they say they are. Popular methods include receiving a text message or email with a validation code that must also be entered when signing into a system.
  • Intrusion Detection Systems — Intrusion detection systems (IDS) can be combinations of software and appliances that passively or actively monitor a network to discover malicious users or policy violations. The IDS’s older sibling, the intrusion prevention system (IPS), takes action against violations or bad actors, from reporting the violations to dropping and blocking violators.
  • Infrastructure Protection — CSPs serve hundreds of tenants and can offer several safeguards to protect their infrastructure from intruders. By design, virtualization in the cloud offers compartmentalization and tremendous security benefits. While CSPs responsibly protect accounts, storage, databases, servers, and hypervisors, they also offer more advanced security features, such as automatic DDoS protection or application firewalls to eliminate malicious web traffic. Each CSP’s capabilities are different and should be aligned with your business goals.
  • Data Protection — Data is becoming ever more abundant, portable, and valuable to businesses, which is why it is an attractive target for hackers. CSP security uses traditional and modern network security features to protect users’ data; however, cloud consumers must also be aware of the Cloud Shared Responsibility Model, which outlines the cloud consumer’s and cloud provider’s respective responsibilities for data security. For example, under IaaS, providers are fully responsible for physical infrastructure security and share responsibility for host infrastructure and network controls with the consumer; at the application level and above, the consumer is 100% responsible for data security. SaaS providers, in contrast, are usually fully responsible for a customer’s application data.
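The multi-factor codes generated by authenticator apps are typically time-based one-time passwords (TOTP, RFC 6238), which can be sketched with Python's standard library alone. This is a minimal illustration: the secret below is a made-up example, and a production system would also handle secret storage, rate limiting, and clock-drift windows.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Derive a one-time code from a shared secret and the current 30s window."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                      # counter as big-endian u64
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per RFC 6238
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted_code):
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(totp(secret_b32), submitted_code)

# Made-up demo secret; real secrets are provisioned per user at enrollment.
secret = base64.b32encode(b"example-secret!!").decode()
print(verify(secret, totp(secret)))
```

Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough to pass verification, which is the point of the second factor.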