Cloud-based services have been around for more than a decade. Over this time the technology has understandably evolved, giving IT leaders new choices in how to provision and manage their operations. SaaS, IaaS, PaaS, cloud-native, containers – each has implications for storage, computation, and cost, and new technologies keep setting new industry standards. In fact, the term “cloud-native” is often used to describe companies that have only ever operated their software on cloud architecture. But given how cloud architecture has evolved, being cloud-native no longer equates to an architecture that takes advantage of everything the cloud can offer. To know what to look for when evaluating a cloud service today, it helps to understand how the cloud is evolving.
A (very) brief look at the history of cloud computing
While the concept of cloud computing easily predates it, most date the modern era of cloud services to the launch of Amazon Web Services (AWS) Elastic Compute Cloud (EC2) in 2006. Competing services from Google (Google Cloud Platform, 2008) and Microsoft (Azure, 2010) soon followed. There were many other options for cloud infrastructure during this period, but AWS, Google, and Azure became the top three public cloud providers.
In the early days, migrating to any of these three cloud platforms focused largely on replicating, in the cloud, the architectures enterprises had been running on-premises or in managed-service data centers. Most instances were deployed as single-tenant systems using virtual machines (VMs). A virtual machine is a bit like a condominium: each resident maintains their own space and systems within one physical structure (the building). Several instances emulating a computer may share the same physical hardware while behaving like individual computers.
By 2012, software-as-a-service (SaaS) had gained popularity and the concept of multi-tenancy began to gain acceptance. Going back to our housing analogy, SaaS is like staying at a hotel, where all of the services are provided: you arrive, you’re handed your access (a room key), and you simply pay for the use of the room. As the as-a-service model took hold, IT managers began to notice limitations in this cloud architecture.
Of the two architectures, virtual machines had the more glaring limitations. Because a VM emulates a physical computer, it virtualizes only the hardware; key components like the operating system still have to be installed and maintained for each instance. Think of it as having a separate heating system for every condominium in the building. This also made scaling isolated resources difficult, because an operating system had to be provisioned along with each virtual machine. The result was over-sized instances: enterprises were provisioned for – and paid for – peak storage and compute needs, generating cost inefficiencies from storage and compute capacity that sat unused.
Two technologies emerged to address the imbalance: containers and serverless computing. The combination of these technologies within platform architectures is driving the next generation of cloud software, supporting seamless, omni-channel interaction, infinite scale, and continuous upgrades.
The move toward microservices
With the arrival of containers in 2013 and serverless computing in 2014, we moved into the next era of cloud architecture – one that leverages the power of on-demand resources to create performant, highly available applications.
Containers and microservices go together. A microservices architecture structures an application as a collection of self-contained services distributed across a network environment. Containers are what enable microservices: they let developers package an application with its key components while leveraging the host for core capabilities like the operating system. These packages, called images, allow applications to be delivered in a much more agile and lightweight way – minimizing the size of the application, increasing speed, and reducing the waste often seen in VM architectures. Revisiting our housing analogy, a container architecture is more like an apartment building, where central services like plumbing and heating are provided to all units, while tenants can still customize their own apartment. Docker, launched in 2013, has emerged as the de facto standard in container technology.
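To make the packaging idea a bit more concrete, here is a minimal sketch of running a public container image from Python. It assumes a local Docker Engine and the Docker SDK for Python (the “docker” package) are available; the image and command are purely illustrative.

```python
# A rough sketch, assuming Docker Engine is running locally and the
# Docker SDK for Python ("docker" package) is installed.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a lightweight image: the host supplies the kernel, while the image
# supplies everything the application needs (runtime, libraries, code).
output = client.containers.run(
    "alpine",                            # illustrative image name
    ["echo", "hello from a container"],  # illustrative command
    remove=True,                         # clean up the container afterward
)
print(output.decode().strip())
```

The host provides the kernel and core services; the image carries only the application and its dependencies, which is what keeps containers so much lighter than full virtual machines.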
Orchestration is essential for containers to work efficiently within a microservices architecture
Executing a microservices architecture adds complexity that containers can’t manage on their own. To deliver with high availability, you need orchestration. Orchestration allows a platform to run containers across multiple machines, scale up and down with demand, keep instances consistent, distribute load between containers, and provide redundancy. As the need for orchestration grew, a number of orchestration layers emerged, including Ansible, Kubernetes, Docker Swarm, AWS Fargate, Mesosphere (built on Apache Mesos), and more.
Kubernetes is currently recognized as the leading orchestration system. Initially developed by Google but, like Docker, now open source, Kubernetes integrates with the Docker engine to coordinate the scheduling and execution of containers. The Docker engine runs the container image, while Kubernetes handles service discovery, load balancing, and network policies. Helm charts – collections of files that describe a related set of Kubernetes resources – allow IT managers to define, install, and upgrade even complex Kubernetes applications.
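As a rough illustration of the desired-state model an orchestrator maintains, here is a short sketch using the official Kubernetes Python client (the “kubernetes” package). It assumes a kubeconfig with access to a cluster; it simply compares the replica count you asked for with the replicas that are actually ready.

```python
# A minimal sketch, assuming the "kubernetes" Python client is installed
# and a local kubeconfig points at a reachable cluster.
from kubernetes import client, config

config.load_kube_config()        # use local kubeconfig credentials
apps = client.AppsV1Api()

# For each Deployment, show desired vs. ready replicas - the gap the
# orchestrator continuously works to close.
for d in apps.list_deployment_for_all_namespaces().items:
    desired = d.spec.replicas
    ready = d.status.ready_replicas or 0
    print(f"{d.metadata.namespace}/{d.metadata.name}: {ready}/{desired} replicas ready")
```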
By leveraging containers, orchestration, and Helm charts, IT managers can build applications as independent service components and manage each of them separately. This differs from traditional service-oriented architectures because each service carries its own application services and API gateway. That independence is also what allows development teams to implement DevOps best practices.
Serverless computing
Serverless computing isn’t really serverless; it’s better described as “on-demand compute capacity.” It’s called “serverless” because a coded trigger launches the computing capacity, rather than a server being provisioned in advance. Sometimes this capability is packaged as “function-as-a-service” (FaaS). The technology suits real-time and event-driven workloads that see spikes of high activity followed by long periods of idle compute. The model lets an application scale continuously based on its triggers and eliminates the need to manage servers. All of the major cloud providers offer a flavor of serverless technology today, and the offerings continue to expand.
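For illustration, here is a minimal sketch of a trigger-driven function written in the style of an AWS Lambda Python handler. The event shape (an API Gateway-like HTTP payload) is an assumption for the example; the point is simply that the code runs when a trigger fires, with no server provisioned or managed by the application team.

```python
# A minimal function-as-a-service sketch in the AWS Lambda handler style.
# The event structure below is a hypothetical HTTP-trigger payload.
import json

def handler(event, context):
    # The platform invokes this function only when a trigger fires
    # (an HTTP request, a queue message, a file upload, etc.).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```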
Cloud Choice Matters: A Guide to Selecting the Right Cloud
When it comes to accelerating the value of cloud computing for the modern enterprise, organizations increasingly need flexibility that goes beyond cost.
Because cloud-based services continue to evolve, Pega’s Cloud Choice Guarantee™ means that enterprises aren’t locked into one cloud technology.
Pega constantly evaluates technology and architecture options to ensure long-term supportability of the Pega Infinity™ suite of software.
Pega Cloud® Services delivers Pega Infinity as a comprehensive service, taking on the responsibility of delivering an architecture that is both scalable and secure and managing the big transitions so enterprises don’t have to. This eliminates the need for client IT departments to build out and manage their own cloud infrastructure, freeing up resources to focus on building mission-critical applications. Additionally, client-managed instances of Pega Infinity are designed for Kubernetes and Helm, so clients who deploy Pega Infinity in public, private, or hybrid clouds are leveraging a fully containerized version of the platform. Pega has long supported, and continues to support, the different flavors of Kubernetes for clients running their Pega apps on Pega Cloud.
As the technology for cloud services continues to evolve, Pega will continuously evaluate and implement the architectures that best support our clients’ demand for a fully managed, modern cloud architecture that is secure and scalable, because we understand the importance of flexibility in managing mission-critical enterprise systems.
Learn More:
- Read our press release on the different ways Pega Cloud supports Kubernetes from leading cloud providers.
- Learn how Pega’s Cloud Choice Guarantee gives organizations the flexibility needed to run, customize, and scale enterprise applications.
- Download our data sheet on Pega’s end-to-end cloud services.
- See how Comerica Bank is building and deploying applications in the cloud.