IDG Contributor Network: SaaS economy in the age of containers
The killer feature of any SaaS business is the massive reduction in operational cost and complexity, including setup, configuration, ongoing maintenance and upgrades. Given the runaway success of SaaS in recent decades, it is clear that the SaaS model, rather than traditional shrink-wrapped software, has made perfect financial sense for many customers. This is in line with the theory that a system carries a constant level of complexity at any given time: IT organizations can deal with that complexity in-house (by investing in a team that handles it) or outsource it to a partner or SaaS/PaaS/IaaS vendor (by exchanging money for complexity). Combine the latter with an OpEx rather than CapEx financial model, easy installation and setup, and flexible pay-as-you-grow options, and it becomes very difficult to justify any other delivery model for generic software.
SaaS vendors, for their part, make this model profitable by spreading the operational running costs across many customers, almost always through a subscription-based model. While legacy software delivery models share the same cost-spreading principle at the R&D level, they lack the ability to arbitrage operational cost at the delivery level: it is expensive to deliver secure, highly available and continuously upgraded software at scale, and it takes a skilled team of developers and operators to keep that software within customer-defined SLA boundaries over time.
Since a large portion of the cost of SaaS development and delivery goes into building the robust and secure infrastructure needed to host the service, vendors make money by building a large, robust infrastructure, breaking it up into smaller pieces of the same quality, and selling those pieces to many customers. SaaS infrastructure usually consists of many components, from databases to load balancers, each configured specifically to deliver the service in a particular way and each with its own high-availability (HA), redundancy and security requirements. Think of a typical CRM SaaS: you will need a multi-zone replicated database, a group of load-balanced and securely firewalled frontend servers, and a cluster of servers to handle background jobs and housekeeping.
As an example, to keep the details of your 2,000 customers in-house, you will need roughly 12 servers, two load balancers and several gigabytes of storage; on top of that, add the cost of an Ops team to maintain those databases and servers. All of this probably adds up to around $20,000 per month just to get going. To make things worse, even with that investment you are not going to get anywhere near the five-nines (99.999%) uptime that a SaaS vendor will give you at a fraction of the price. In this scenario it makes perfect sense to sign up for a SaaS alternative instead, paying $2,000 per month for a service that is always up, upgraded and backed up.
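To make that comparison concrete, here is a minimal back-of-envelope sketch in Python. Only the headline figures (roughly $20,000 per month self-hosted versus $2,000 per month for the SaaS) come from the example above; the per-server, load-balancer, storage and Ops costs are assumptions chosen purely to show how quickly the self-hosted bill adds up.

```python
# Back-of-envelope comparison: self-hosting a CRM vs. subscribing to a SaaS.
# Unit prices below are illustrative assumptions, not real quotes; only the
# headline figures (~$20k/month self-hosted, $2,000/month SaaS) come from the text.

SERVERS = 12              # application, database and background-job hosts
LOAD_BALANCERS = 2
SERVER_COST = 600         # assumed $/month per server
LB_COST = 200             # assumed $/month per load balancer
STORAGE_COST = 300        # assumed $/month for replicated storage
OPS_TEAM_COST = 12_000    # assumed share of an Ops team's time, $/month

self_hosted = (SERVERS * SERVER_COST
               + LOAD_BALANCERS * LB_COST
               + STORAGE_COST
               + OPS_TEAM_COST)
saas_subscription = 2_000  # figure used in the article

print(f"Self-hosted: ~${self_hosted:,}/month")        # ~$19,900/month
print(f"SaaS:        ~${saas_subscription:,}/month")
print(f"Premium for running it yourself: {self_hosted / saas_subscription:.0f}x")
```

However you tweak the assumed prices, the fixed costs of hardware, redundancy and people dominate long before the first customer signs up, which is exactly the gap the SaaS subscription exploits.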
This, however, might change.
To see why, it is worth understanding why running a highly available, secure and robust infrastructure is so expensive. When it comes to infrastructure, a chain is only as strong as its weakest link: high availability and security cannot be achieved by hardening only parts of the system. They have to be engineered into each and every component, which adds cost and complexity and drives the bill even higher.
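A quick way to see the weakest-link effect: for components that must all be up for the service to be up, availabilities multiply, so every component has to exceed the end-to-end target. The per-component figures in the sketch below are assumptions used only for illustration.

```python
# Chained availability: if every component must be up for the service to be up,
# end-to-end availability is the product of the parts. Figures are illustrative.
components = {
    "load balancer": 0.9999,   # assumed per-component availability
    "frontend tier": 0.9995,
    "database":      0.9990,
    "job workers":   0.9990,
}

total = 1.0
for name, availability in components.items():
    total *= availability

print(f"End-to-end availability: {total:.4%}")              # roughly 99.74%
print(f"Expected downtime: {(1 - total) * 365 * 24:.1f} hours/year")
```

Even with every component at three or four nines, the combined service falls well short of five nines, which is why each link in the chain has to be over-engineered and paid for.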
Now, consider what happens if all of those requirements are built into a generic, self-healing, hyperscale infrastructure, so that any application running on top of it is inherently highly available, redundant and secure. This is the promise of containers. Instead of effort being spent on delivering each individual service at a high SLA, the infrastructure takes care of this at a lower level and exposes those properties as a service to the user. In doing so, containers take away one of the biggest advantages of the SaaS delivery model: the infrastructure arbitrage described earlier in this post.
Container-based infrastructure systems such as Kubernetes allow companies of any size to build their own custom, highly available and robust infrastructure on top of private data centers or public clouds, with fine granularity and flexibility and without giving up much in return. In this new world of container-based infrastructure, IT teams spend their time building and maintaining a few Kubernetes clusters, while external vendors and in-house developers use those clusters to deliver services to their clients.
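As a rough illustration of what "high availability as a property of the platform" looks like, the sketch below uses the official Kubernetes Python client to declare a replicated frontend; Kubernetes then keeps three copies running and replaces any that fail. The application name, image and replica count are illustrative assumptions, not something from the article.

```python
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context

LABELS = {"app": "crm-frontend"}  # hypothetical app name, for illustration only

# Declare the desired state: three replicas of a stateless frontend.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="crm-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes restarts or reschedules pods to keep 3 running
        selector=client.V1LabelSelector(match_labels=LABELS),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=LABELS),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="frontend",
                    image="example.com/crm-frontend:1.0",  # assumed image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

# A load-balanced entry point in front of the replicas.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="crm-frontend"),
    spec=client.V1ServiceSpec(
        selector=LABELS,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="LoadBalancer",
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The point is not these particular objects but that redundancy, recovery and load balancing are expressed declaratively once, at the platform level, rather than rebuilt by hand for every service that runs on it.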
It will probably take years before this shift affects the SaaS industry at a significant level. If we look carefully, however, we can already see savvy IT teams working to bring this future about: building delivery pipelines for their code, along with application management stacks that unlock the automation of containerized infrastructure on both public and private clouds.
The SaaS delivery model still has a lot going for it; for one, it is now the dominant model for consuming software, wherever it sits and however it is procured. Infrastructure arbitrage, however, is not going to be one of its key advantages for much longer. While cloud computing was the killer application for virtualization, changing the economics of SaaS might be the killer application for containerization.