[MSFT – Microsoft] Death Star, Reformed
There’s a good reason why Microsoft was dubbed the Death Star by many fearful detractors during the on-premise era: its OS/application stack, with 90%+ desktop share, was an impermeable force that whipped the surrounding computing galaxy into submission. Given Windows’ ubiquity, value-added resellers and systems integrators optimized their resources and relationships by supporting Windows applications. Third-party developers had little choice but to create applications compatible with Windows because Windows was standardized on PCs, the primary node through which consumers accessed applications. Windows was the ultimate platform before that word became bastardized…it just happened to be the wrong platform for a world that was shifting to mobile and cloud! But a fundamental change in how applications are distributed and consumed – the transition from on-premise, device-specific licenses to user-specific, cloud-hosted subscriptions accessible on any device – has reoriented the competitive landscape.
Azure was released in early 2010 during Ballmer’s tenure as CEO under the name “Windows Azure”, which tells you something about where the company’s priorities were at the time. But Microsoft’s OS-agnostic orientation really started with current CEO Satya Nadella, who in early 2011 took over the Server and Tools Business from which Azure sprang and, against great resistance from a team that was winning with on-premise server products, reset the group’s priorities. In fact, if you view some of the “Azure Fridays” videos that Microsoft uploads to YouTube, you’ll see that some of the demos are done on a MacBook. [“OS-agnostic” is a bit misleading, as the company has recently started re-bundling Office/EMS with Windows (Microsoft 365), but what I mean is that the customer can choose this bundle or not. Microsoft’s applications are no longer predominantly captive to Windows].
[The tech nobility – not just software incumbents like Microsoft, Oracle, and Salesforce.com but behemoths with major infrastructure and machine learning chops like Amazon and Google – play on multiple layers of the enterprise computing stack (mission-critical applications, developer tools, storage, processing), but they spread out from somewhat different origins: AWS extended its reach from IaaS to PaaS, Microsoft from PaaS/SaaS to IaaS, Salesforce from SaaS to PaaS. The distinctions between some of these layers are sort of fuzzy but still useful to hold in your mind. If you aren’t familiar with those acronyms, you might find this post helpful].
An enterprise computing moat is no longer defined by forced lock-in to proprietary platforms. Windows OS has increasingly become a sideshow: 1/3+ of Azure’s virtual machines are Linux-based; a growing share of app development is open-source; nearly 70% of O365’s ~150mn consumer and enterprise subs have the applications deployed on Android and iOS; Active Directory, Microsoft’s user account management and credentialing platform, can be connected to AWS and Google Cloud resources. Supplanting the closed model is a cat’s cradle of portable applications touching several disparate hosting and development environments. Rather than dominate distribution channels, suffocate competitors, and extract surplus from reluctantly captive customers, the new way to moat is to offer fertile soil for others to grow, laying the indispensable infrastructure upon which other enterprises land, build, and dispense their core applications. Any semblance of “lock-in” comes from delighting demand (enterprises and developers) with useful tools and abstractions, not from controlling supply (integrators and OEMs) with heavy-handed incumbency. I’ve touched on this “infrastructure” theme before in a prior post, in the context of the payments space:
“V’s open partnership model is a major change. VisaNet, as sophisticated and broad as it is, operated as a tightly restricted network with unidirectional flows (pulling money from one account and moving it to another) and limited functionality for those accessing it for most of its life. But now Visa has modularized its network (management actually began talking about this as early as 2010), decomposing VisaNet into its core building blocks, providing a set of APIs (over 60 web services/APIs available through a developer portal that launched in 2016) that financial institutions, platform operators, and payment enablers can use to interface with Visa’s network and services to create a wide variety of interoperable solutions……Visa and Mastercard are entrenching themselves as the very first layer of disintermediation, the basic and indispensable infrastructure underlying the entire payments ecosystem, whatever the country, whatever the medium. And it seems increasingly the case that various payment providers will just plug into V/MA’s secure pipes while they themselves work on improving the customer experience.”
I similarly view Azure, GCP, and AWS as utilities underpinning all sorts of unpredictably cool shit that others will build. But taking on this role also presents the challenge of how to claim value when no one is really forced to use your toolkit. An extension of disruption theory (as expounded in Christensen’s The Innovator’s Solution) is helpful here: integrated architectures, made up of tightly unified, proprietary, interdependent components, are best suited to optimize performance and tend to dominate until a product becomes functionally “good enough” to meet the market’s needs. At that point, other vectors of competition – convenience, flexibility, speed to market – become more important and call for standardized, independent pieces that comprise a modular architecture. So, in the context of computing, consider (broadly) the trek from mainframes [chips, hardware, operating system, and applications all fused together] to Wintel [hardware and chips modularized but OS and applications integrated] to enterprise cloud [hardware, chips, OS, and applications all modularized].
Venture capitalist Bill Gurley and tech writer Ben Thompson have also both written about the benefits of hardware/software integration, here and here.
And increasingly, even the applications themselves are disaggregated into independent functions that communicate through APIs. In fact, entire multi-billion-dollar companies like Stripe, SendGrid, and Twilio have been built around functions – payment processing, email, messaging – that are shared across many different applications. Disaggregation is taking hold on the development side too. Containers let you pack everything you need to run an application – code, dependencies, libraries – into, um, “containers” that can be moved to and quickly deployed on various computing environments. They have become increasingly popular because they enable microservices architectures, wherein a single, unwieldy “monolithic” application is broken down into smaller containers, each one independently executing a specific process – delivering a page, storing messages, accepting orders – as part of delivering a seamless application user interface. You can develop, debug, and test a single containerized service without affecting all the other containers, and do it all much more quickly and nimbly than you could in a traditional virtualization scheme, which matters considering the hastening rate of innovation and change in application development today. And you can do all this with fewer computing resources than required under a traditional virtualization apparatus, which, unlike containers, still tethers the application code to a specific operating system [and so, under virtualization, if you run multiple applications, you will have multiple “guest” operating systems that each have to boot before communicating with a single “host” operating system, which is burdensome compared to having multiple “containerized” applications communicating with a single host OS].
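To make the disaggregation idea concrete, here’s a minimal sketch in Python (standard library only; the service name and port are hypothetical) of one piece of an application factored out as an independent service that other components consume purely through an HTTP API – in a real deployment, this process would live in its own container:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A single "microservice" that does one thing: accept an order and
# return a JSON confirmation. Its consumers know only its API, nothing
# about its implementation.
class OrderService(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse the item name out of the path, e.g. /order/widget
        item = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"status": "accepted", "item": item}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def start_service(port):
    """Run the service on a background thread, as its own process would."""
    server = HTTPServer(("127.0.0.1", port), OrderService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Another part of the application (say, the web front end) calls the
# service over the network, exactly as it would call Stripe or Twilio.
def place_order(item, port=8901):
    with urlopen(f"http://127.0.0.1:{port}/order/{item}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_service(8901)
    print(place_order("widget"))  # {'status': 'accepted', 'item': 'widget'}
    server.shutdown()
```

Because the only contract is the URL and the JSON it returns, the order service can be rewritten, scaled, or redeployed independently – which is the whole point of the microservices architecture described above.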
But in an increasingly modularized computing environment, how do you create stickiness? Well, you can introduce higher-level abstractions. Okta is trying to claim the critical point of integration across cloud computing by owning user identities and rendering them portable across cloud stacks and across applications on those cloud stacks. With respect to containers, AWS exploited its 7-year head start in enterprise cloud to attempt infrastructure lock-in, providing a container orchestration service that made it easy to manage and scale microservices on AWS and AWS only. That is, until Google – in an effort to draw customers onto its own cloud and presumably to preempt AWS lock-in – open-sourced Kubernetes in 2014. Kubernetes enables microservices – the containers holding the application plus the necessary adjoining feature sets like autoscaling, load balancing, and service discovery – across all the major public clouds and on-premise by translating all the different ways these computing environments implement microservices into a common language [Kubernetes descends from Borg, the internal Google system that orchestrated the services underlying Gmail, Search, and Maps]. It also displays in real time all the applications being run and the computing resources each application is consuming across computing environments.
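The heart of what an orchestrator like Kubernetes does can be caricatured as a reconciliation loop: you declare the state you want, and the system repeatedly compares that against what’s actually running and acts to close the gap. A toy sketch (all service names hypothetical; real Kubernetes layers scheduling, health checks, service discovery, and autoscaling on top of this idea):

```python
# Toy reconciliation loop in the spirit of Kubernetes: compare declared
# desired state against observed state and emit the actions needed to
# converge. This is a caricature for intuition, not the real algorithm.

def reconcile(desired, running):
    """Return the actions needed to move `running` toward `desired`.

    desired: dict mapping service name -> replica count wanted
    running: dict mapping service name -> replica count observed
    """
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    # Anything still running that is no longer declared gets torn down.
    for service, have in running.items():
        if service not in desired:
            actions.append(("stop", service, have))
    return actions

if __name__ == "__main__":
    desired = {"orders": 3, "messages": 2}   # what the operator declared
    running = {"orders": 1, "pages": 1}      # what the cluster reports
    for action in reconcile(desired, running):
        print(action)
```

Because the desired state is just declarative data, the same declaration can be handed to any conformant cluster – which is why, once every major cloud runs Kubernetes well, the orchestration layer itself stops being a source of lock-in.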
Setting up and running Kubernetes was super easy on Google Cloud but onerous on Azure and AWS until just recently, when Amazon and Microsoft, witnessing Kubernetes’ torrid pace of adoption, grudgingly (I assume) jumped on the bandwagon. They are now dedicating resources – managed services like Azure’s AKS and Amazon’s EKS – that make it far easier than before to set up, manage, and operate Kubernetes environments. So now that Kubernetes is deployed with increasingly comparable ease across Google Cloud, AWS, Azure, on-premise, wherever, no single cloud vendor has lock-in on microservices-based application development.