(~40 minute read)
—
When most people hear the word "innovation", they are likely to think about a cutting-edge product (such as the latest Apple device) or a company in the technology space (like OpenAI). While innovation isn't exclusive to the technology sector, all technology businesses are fundamentally innovation businesses. Etymologically, the meaning of the word is "a better way of doing", and technology companies come into existence because they give birth to a better way of solving a meaningful problem – one that enough people care about – than what the status quo offered. Statistically, the approach of most first-time innovators will eventually be matched and bettered by someone else, at which point they will fade in relevance and perhaps cease to exist. A small percentage will continue to reinvent themselves, go on to an Act 2 and an Act 3, and perhaps build a durable franchise. There is, however, one very specific type of exogenous phenomenon that supercharges this evolutionary cycle of innovation. It's the mainstreaming of a new platform, commonly referred to as a "platform shift", and what it represents is the equivalent of a geological tectonic shift.
In the industrial manufacturing sector, the majority of leading companies are not vertically integrated. West Elm parent Williams-Sonoma doesn't own forests, Nike and Macy's aren't in the business of growing cotton, Boeing doesn't fabricate the steel used on its airplanes and Toyota doesn't produce the rubber for the tires of its vehicles. Managing these supply chains is incredibly important for these companies; they rely, however, on partners for the raw materials. Their focus instead is on operating with excellence at scale in their "core competencies", which is what led them to establish their market-leading competitive positions.
Software companies, like their peers in the industrial manufacturing space, are not vertically integrated. They are built on technology platforms. Technology platforms can loosely be defined as the underlying set of technologies that serve as a base upon which other applications and technologies are developed. What's different about software companies versus their manufacturing peers is that the half-life of their value-chain dependencies (i.e. the platform they are built on) is orders of magnitude shorter. Boeing doesn't realistically have to worry about the steel and aluminum used in its planes getting replaced by newer materials. Similarly, Nike doesn't have to worry about cotton and other core fabrics getting displaced by different materials. Even if that were to somehow happen, the structure and balance of power of these industries make it such that these manufacturing leaders would very likely be able to exert their influence to gain access to the new value-chain components. That is not the case in the technology sector, and particularly not in the software space.
Platform changes represent tectonic shifts for software and other technology companies. It's when the structure of a category gets upended and incumbents become vulnerable because the bedrock components of a product's advantage now threaten to become liabilities. It's not just the technology architecture that becomes an issue; it's often the entire business around it that risks becoming an albatross. It's actually relatively rare for a scaled software leader to be competitively displaced by an upstart based on head-on competition in feature functionality on the same platform. The primary reason incumbent leaders lose their lead tends to be platform shifts. The rise of Google's G Suite to an estimated $10B+ in revenue despite Microsoft's dominance in productivity software, the rise of Salesforce to a $250B+ market cap despite SAP and Oracle's $300B+ combined value as incumbents, the rise of Snowflake to an $80B+ market cap despite the incumbent leadership of Teradata and Vertica, the rise of Uber to a $140B+ market cap in the presence of a $24B-per-year US taxi industry, and Adobe's $20B acquisition offer for Figma despite its three-decade lead and $10B+ run rate across its creative suite are all prime examples of the profound potential impact of platform shifts. Platform shifts typically represent such a large opportunity for a reshuffling of the chessboard that their mainstream acceptance by investors and other industry participants is also what tends to fuel extreme exuberance and market bubbles, such as the dotcom bubble or what we're seeing with AI today.
Platform shifts are not always obvious ahead of time. There can be a lot of type 1 errors involved in trying to identify and capitalize on them (see blockchain, the metaverse, etc.). Also, the timeframe, customer segment and 'business flank' via which they are likely to impact an incumbent are incredibly difficult to forecast precisely. For technology leaders, recognizing and acting on a true platform shift is a foundational responsibility of the CEO and senior leadership team. One of my favorite Satya Nadella quotes is the following: "…To be able to see these secular trends long before they become conventional wisdom, […] and change your business model, change your technology and change your product is the core challenge of business leaders. In tech it's unforgiving but now that every company is exposed to these forces, we all have to deal with it."
Note, the concept of platform shift as referenced in this article has some parallels and overlap with Clay Christensen's definition of 'disruptive technology' in his seminal 'The Innovator's Dilemma'; however, it's a distinct concept that is more relevant to a sector and a horizontal set of technologies than to any particular incumbent.
When I joined Alteryx in mid-2018, the company was at about $160M in ARR and was one of the fastest growers in the data and analytics space. The primary deployment model for the company's products was desktop, paired with either an on-premise or a self-managed cloud deployment (i.e. VPC). While the company's products could easily connect to all kinds of data in the cloud, we did not offer a hosted, multi-tenant deployment of the platform at the time, despite how mainstream and prevalent hosted cloud deployment had become by 2018.

I will avoid going into all the nuances of the analytic platform market, but in summary, it helps to understand that the deployment dynamics of software in this category are very different from other types of infrastructure and application software. G2K/Enterprise customers have generally been much more hesitant to adopt hosted cloud analytics products, largely due to the cross-cutting nature of the data access required. It's very different from adopting, say, a marketing automation app, which is limited to generating and persisting lead and prospect data with a third-party vendor in their cloud and syncing that data with a CRM. Signing up for a multi-tenant hosted analytics platform means that an enterprise would have to hand over pretty much every one of its data crown jewels (financial, operational, employee, etc.) to an analytics vendor, since the more important the data, the more critical it is for the enterprise to ensure it gets analyzed, which is the purpose of an analytic platform. You can see why this would be problematic.

Additionally, Alteryx's GTM model was built around a value prop tuned to a non-hosted product. This isn't a well-understood fact (especially by investors), but NOT being cloud-hosted is precisely why Alteryx reached a billion in ARR and beat its competitors. In fact, back when Dean and co. were raising their Series A in 2013, they were turned down by several top-tier Sand Hill VCs who opted instead to back 'cloud native' analytic platform competitors. None of those competitors succeeded because, as Tableau and Alteryx demonstrated over the next 10 years, the optimal GTM model for the analytic platform space ended up being a bottom-up, product-led/product-assisted motion targeted at line-of-business end-users with a <45-day sales cycle and <$15K ACV, followed by rapid expansion cycles. This land-and-expand motion (built around quick time-to-value and industry-leading net-retention rates in the high 130s) was fundamentally not possible with a cloud-hosted product: enterprise infosec clearances alone would typically take more than 45 days, and no sane analyst would risk uploading highly sensitive financial or other proprietary data to a vendor's cloud without having been given clearance by IT to do so, even for a simple proof-of-concept demo. Take that away and you're back to a long, expensive, top-down selling motion that necessitates a high average deal size (ADS), much higher upfront capital requirements to fuel Sales & Marketing and generally unattractive unit economics, particularly at the early stages.
Now, all of this said, the senior leadership team absolutely recognized the importance of hosted cloud deployment to the future of the company and how it would be a critical pillar of a hybrid deployment strategy going forward. Analytic platform deployment models are highly dependent on data gravity, and as data gravity steadily moved to the cloud, it was crystal clear that compute would have to follow. Hosted cloud represented a tectonic platform shift. There was debate on some of the details (timing, segment-level sequencing, demand mix, architecture, etc.); however, there was consensus that the next-gen cloud-hosted platform was one of the most critical strategic efforts at the company and that, if we got it wrong, we would risk losing our leadership and relevance in the category.
Over the next several years, we spent an inordinate amount of capital, strategic planning, product and R&D cycles on fine-tuning our cloud strategy and executing against it. As a strategic planning leader, I personally spent hundreds of hours researching and studying how different software and technology companies had approached and successfully navigated not just a technology transition but an overarching business transition as a result of a re-platforming. We had a number of stops and starts along the way and learned many lessons during the course of this multi-year exercise. That said, the experience of studying how to navigate platform shifts over the last 5 years has led me to a number of foundational insights and conclusions on this topic. More specifically, I've concluded that there are 3 fundamental ways in which a company can approach a product re-platforming exercise, each with its own distinct implications for time-to-market, core value prop and business model. It's important to recognize the differences across these approaches and to organize around a re-platforming decision in a deliberate and coherent manner when approaching this critical strategic choice.
This framework is displayed visually below.
The different approaches entail a set of choices around product capabilities (i.e. feature functionality) and the associated commercial model. The three primary approaches on product capabilities are:
- Fully overlapping feature parity between the products on the new vs. old platform
- A subset of critical feature-functionality offered on the new platform
- A subset of original feature functionality plus some net-new core feature functionality offered on the new platform.
The choice space on the commercial model side spans keeping the same model construct, which can then translate into equivalent, higher or lower price points. Alternatively, it can entail an entirely different construct (e.g. moving from perpetual to subscription, or from subscription to a consumption model). Lastly, the license to the re-platformed offering can instead be part of a single multi-platform entitlement. The optimal model will be a function of relative competitive differentiation and the demand profile of a company's product offering; however, a software vendor can choose how to approach this decision as part of a coherent and integrated set of design choices underpinning its platform strategy.
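To make the choice space concrete, here is a minimal sketch in code; the names and structure are my own illustration rather than part of any formal framework, but they capture the idea that a re-platforming decision pairs one capability approach with one commercial construct.

```python
from dataclasses import dataclass
from enum import Enum

class CapabilityApproach(Enum):
    FULL_PARITY = 1          # full overlap with the original product's features
    CRITICAL_SUBSET = 2      # only the must-have subset of original features
    SUBSET_PLUS_NEW = 3      # subset of original features plus net-new capabilities

class CommercialModel(Enum):
    SAME_CONSTRUCT_SAME_PRICE = "same construct, equivalent price"
    SAME_CONSTRUCT_NEW_PRICE = "same construct, higher or lower price"
    NEW_CONSTRUCT = "different construct (e.g. perpetual -> subscription -> consumption)"
    MULTI_PLATFORM_ENTITLEMENT = "bundled into a single multi-platform entitlement"

@dataclass
class ReplatformingChoice:
    capability: CapabilityApproach
    commercial: CommercialModel

# Example: a 'lite' port of a feature subset, monetized at a different price point
choice = ReplatformingChoice(CapabilityApproach.CRITICAL_SUBSET,
                             CommercialModel.SAME_CONSTRUCT_NEW_PRICE)
print(choice)
```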
Influencing these design choices are three principal, first-order criteria. The first is the relative product sophistication enabled by the old vs. new platform, whether that's the same or different, and how much of the underlying architecture (e.g. APIs) is reusable across the two platforms. The rapid rise of mobile between 2007 and 2020 implied a platform transition from desktop/web to mobile for many software category incumbents. This represented a shift from a more feature-rich to a less feature-rich platform in return for the core benefits of mobility and a greater number of distribution endpoints. For web or desktop products built on a services architecture, this was not a huge incremental lift, but for monolithic architectures (i.e. the majority at the time) it represented a material engineering undertaking. The transition from desktop to web, or from client/server to SaaS, between 2005 and now represented a transition from one sophisticated, feature-rich platform to another (especially once core JavaScript technology in modern browsers evolved to the point of enabling close-to-native app experiences). The relatively distinct technical stacks for desktop vs. web apps, coupled with the feature depth of advanced and mature native desktop apps, made this type of re-platforming a massive undertaking.
The second criterion is the variable cost-to-serve dynamics of the old vs. new platform. One-time costs matter as well but tend to influence the binary decision of whether or not to compete on the new platform. Changes in variable cost structures, on the other hand, have a direct impact on the monetization models that are feasible on the old vs. new platform. If they are materially different, then that warrants a rethinking of the commercial model. Let's consider a few examples. Desktop software is essentially a zero-marginal-cost offering. On the surface, it might appear that the same is true for mobile software, but in reality there is a hefty 30% rev-share that needs to be handed over to the mobile app platforms for distribution. There are ways around that (as Amazon has done with their iOS apps), but that depends on the nature and structure of the app. Similarly, client-server is a zero-marginal-cost model for software vendors (SAP doesn't incur any material direct costs when you run S/4HANA on-prem); however, the alternate SaaS model results in a hefty variable cost incurred by such vendors (e.g. Salesforce) to deliver their product. This can often add 10-20 pts to COGS, which, all else equal, would materially erode operating margins if delivered at the same recurring price point as a client-server delivery model.
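As a rough back-of-the-envelope illustration of why this matters (all numbers below are hypothetical and chosen only to mirror the 30% rev-share and 10-20 pt COGS figures above):

```python
# Illustrative only: hypothetical numbers showing how hosting costs and rev-shares
# compress margins when the re-platformed product sells at the same recurring price.
price = 100.0                       # annual subscription price (arbitrary units)

# Client-server / on-prem delivery: near-zero marginal cost to serve
on_prem_cogs = 5.0
on_prem_gross_margin = (price - on_prem_cogs) / price               # ~95%

# SaaS delivery: vendor absorbs hosting/infrastructure costs (assume +15 pts of COGS)
saas_cogs = on_prem_cogs + 15.0
saas_gross_margin = (price - saas_cogs) / price                     # ~80%

# Mobile distribution: a 30% platform rev-share comes off the top before other COGS
mobile_net_revenue = price * (1 - 0.30)
mobile_gross_margin = (mobile_net_revenue - on_prem_cogs) / price   # ~65%

print(f"on-prem: {on_prem_gross_margin:.0%}, SaaS: {saas_gross_margin:.0%}, "
      f"mobile: {mobile_gross_margin:.0%}")
```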
The last major consideration is the target customer and market for the product on the new platform. This can be the same customer as before (making it a replacement offering) or a different customer (making it an incremental and accretive offering). If it is truly a different customer (which is quite rare), then one can think of the new platform offering as additive and able to co-exist, allowing a significant increase in the degrees of freedom that a software vendor has in designing the offering. Conversely, if the re-platformed product is targeted to the same customer as a substitute, replacement, or alternate offering, then the platform transition dynamics are those of a customer migration and come with far fewer degrees of design freedom, as a greater number of legacy considerations have to be factored into the new platform offering for practical purposes. There is also less flexibility in the design of the commercial model. This latter phenomenon is precisely why upstarts built natively on the new platform have a structural competitive advantage over incumbents: they aren't encumbered by the complexities of having an installed base operating on a legacy model. Intellectual honesty about the ICP for the new platform offering (and whether it's an additive or a replacement product) is crucial given the downstream implications of this assumption.
It's important to recognize that the decision considerations on the right of the framework are mostly exogenous factors. The first two (relative platform sophistication and relative cost-to-serve) are 'hard market realities' that a company unfortunately does not control. The fact that there are 2 billion+ smartphone devices worldwide and that mobile devices offer a 'less sophisticated' platform vs. desktop or the web, at least from a compute and arguably a UX standpoint, is a market reality that every enterprise has to deal with. How an enterprise chooses to approach its R&D choices across the portfolio of available platforms has consequences. For example, the lack of priority given to mobile-native experiences by the major discount trading platforms (E*Trade, Schwab, etc.) is what allowed Robinhood to carve out a massive market opportunity and seize a material amount of market share by prioritizing a mobile-first architecture and UX coupled with a disruptive business model. The third consideration, 'target customer', is a 'soft market reality'. What that means is that an organization has some control over how it segments its offering portfolio and positions certain products to different segments across various platforms. There is, however, a certain amount of 'market pull' that cannot be resisted here. Take SAP, for example. The company might claim that S/4HANA on-premise is targeted to large banks and manufacturers while S/4HANA Cloud is targeted to more modern cloud-native and mid-size businesses, and operationally organize the field org around this segmentation. This could very well represent the majority of customers in these segments for them; however, there will surely be some large banks or manufacturers that want to move to a pure cloud architecture and not maintain on-prem ERP servers. If SAP wants to serve these customers with its latest in-memory database, it has no choice but to give them the hosted offering under a compelling deal structure.
So how does one go about navigating the choice space to land on the right approach? The figure below is helpful in navigating the intersection of various considerations with relevant examples of companies that have successfully done so.
The idea behind this model is to understand the most feasible way to navigate a platform transition based on certain technological and market realities. On the left is the platform sophistication vector, representing the relative sophistication of the old vs. new platform. There are three relevant archetypes here: a transition from a less sophisticated to a more sophisticated platform (e.g. from a mobile-native to a desktop-native app), a transition from a platform with higher sophistication to one that is less sophisticated (e.g. from desktop to mobile), and lastly a transition from one highly complex platform to another highly complex platform. On the top are two strategic scenarios represented as columns: the first where the target customers for the re-platformed offering are different (making it an additive offering) and the second where they are the same as those for the old platform (making it a replacement offering).
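One rough way to summarize the kind of mapping the figure describes (this is my own reconstruction based on the discussion that follows, not a reproduction of the figure itself) is as a simple lookup from the two dimensions to the typical approach:

```python
# My own rough reconstruction: (sophistication transition, target customer) -> typical approach.
TRANSITION_PLAYBOOK = {
    ("low_to_high", "different_customer"): "Approach 1: full port, optionally plus net-new features",
    ("low_to_high", "same_customer"):      "Approach 1: full port, equivalent business model",
    ("high_to_low", "different_customer"): "Approach 2: 'lite' subset, separately monetized at a lower price",
    ("high_to_low", "same_customer"):      "Approach 2: 'lite' subset, bundled as a multi-platform entitlement",
    ("high_to_high", "different_customer"): "Approach 2a/2b or 3, depending on the demand profile",
    ("high_to_high", "same_customer"):      "Approach 3: feature subset plus net-new platform-native capabilities",
}

print(TRANSITION_PLAYBOOK[("high_to_high", "same_customer")])
```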
The first scenario (row) represents a transition from a less complex platform to one that offers greater sophistication. Less sophisticated platforms tend to trade off advanced capabilities in exchange for convenience, reach or cost. For example, an app designed to operate on an edge device will be less sophisticated than one that has direct access to workstation/cloud compute, and edge platform APIs are therefore less capable than cloud-connected APIs. Apps that were built mobile-native and eventually wanted to expand to desktop or the web are a good cohort to study for this transition. Being designed natively on a lower-capability platform means that these apps are not as difficult to port in their entirety to another platform that offers more sophistication, especially if they are designed on a decoupled, services-based architecture, which most mobile apps are indeed built on. In terms of platform transition strategies, approach 2 (porting a subset of functionality) doesn't make sense as it doesn't offer a huge time/opportunity saving, and approach 3 (porting a subset of functionality with net-new capabilities), while it could be adopted, is also not the most sensible: given the relative ease of doing so, it makes more sense to fully replicate the original capability (which has proven product-market fit) before pursuing adjacent unproven capabilities. This leaves us with approach 1 (a full port with some potential net-new capabilities) as the optimal platform transition strategy, which is exactly the approach taken by several successful incumbents in this situation, regardless of whether they were targeting the same or different customers. Take Robinhood, for example. Though initially available only via mobile, once it had crossed 5M MAUs, Robinhood decided to offer a web experience as well to its existing users and presumably to potential customers who were using alternate web-based discount platforms and preferred a larger, browser-based user experience. Robinhood's web functionality was exactly the same as the mobile functionality, and they chose not to go any further on web platform features despite requests by some customers for richer charting and analysis experiences on the web. They used the same underlying APIs to power the backend, and the business model for the web remained the same as that for mobile (no trading commission, with a focus instead on payment for order flow).
Another example of an app that was ported from mobile to web and native desktop is WhatsApp. Designed initially to be very much a mobile communication service, it has evolved to become the most important direct communication platform in the world and is increasingly moving into a business-focused value prop for monetization. In fact, the number of WhatsApp messages sent daily exceeds the entirety of all text messages sent globally across all countries by a factor greater than 10x. Not surprisingly, WhatsApp recognized the importance of enabling cross-platform communication, especially on PCs, not to attract new users (it had pretty much already saturated penetration across its multi-billion-user addressable market) but instead to give users the option of a more comfortable typing and web conferencing experience. Again, porting only a subset of functionality to the web or desktop (a more sophisticated platform) wouldn't make sense, so their strategy was to initially port the entirety of the mobile app's capability to these platforms. Unlike Robinhood, though, WhatsApp didn't stop there. It actually went further and added some features to its desktop apps before doing so on mobile where it made sense. An example of that is screen-sharing. Screen-sharing is a key feature that enables effective collaboration and is particularly useful in desktop apps. Though WhatsApp is a free-to-use app with no immediate monetization considerations, unparalleled market share and a strong network-effect-driven moat, their calculus for doing so was likely based on capturing a greater share of use-case-based engagement across existing users by pursuing this feature initially on the platform where it added the most value.
The third example of apps that were ported from a less sophisticated to a more sophisticated platform comes from Uber and Lyft. As we're all aware, these apps were built natively on mobile because their core value prop was uniquely tied to mobility. Porting the ride-hailing experience to the browser for desktop access was not a massive undertaking for what would presumably be less frequent or edge use-cases (e.g. calling a ride while at home on your home computer). Instead of focusing on just the exact same customer and use-case as mobile, both these companies chose to first prioritize a key business adjacency with the desktop re-platforming: enterprise use-cases. Specifically, they focused on luxury car dealerships and service centers that wanted to provide free mobility to end-users when they brought their own cars in for service. By building scheduling and other tools that allowed the business to leverage the network and issue rides on behalf of users, while keeping the interaction seamless for end users on their individual accounts, the desktop experience became more of a superset of mobile functionality that fueled a TAM-enhancing business for both of these companies.
In summary, as these examples have illustrated, whether the target user is the same or different, the ideal approach for re-platforming from a less complex to a more sophisticated platform is approach 1 (at minimum a full port of all features from the prior platform), with an equivalent business model on the new platform being perfectly viable. There is often no real time, cost, or strategic benefit to pursuing just a subset of the capabilities of the original platform. If anything, as WhatsApp and Uber demonstrated, there is often value in pursuing all original features plus a superset of product capabilities in the re-platformed offering. Of all the scenarios, this type of re-platforming is the least complex and is most often pursued as an offensive move from a position of strength by an incumbent.
This brings us to the second major archetype of platform transition, which occurs when moving a product from a more sophisticated to a less sophisticated platform. While harder than the first (moving from lower to higher sophistication), this scenario too is not that complex and is mostly undertaken as an offensive move, often from a position of strength, though there are often key business model changes and implications involved. The desire to move to a less sophisticated platform by definition means that there is some potential value to be captured by sacrificing product sophistication for something else, be that reach, distribution, or cost. Because the new platform doesn't offer the full breadth and richness of the original platform, replicating the full capability set of the original product (approach 1) is not an option; the only viable approaches are 2 and 3, with 2 being the most straightforward.

I have found that it's easiest to think of this type of re-platforming exercise as similar to a product versioning decision, specifically one that involves offering a 'lite' version of the product, just on a different platform, which actually helps create a more defined fence between the products than if the 'lite' version were offered on the same platform as the original offering. That said, the big consideration here (as with any product versioning decision) is how much commercial upside vs. cannibalization risk exists with the lite version, and that is fundamentally a price elasticity question with the additional wrinkle of cross-platform substitution willingness thrown into the mix.

The key decision variable here is the target customer. Let's first look at the 'same' customer. If we are talking about a less feature-complete product targeted to the same customer, it is by definition something that is likely to be used in conjunction with the original for specific instances of convenience rather than as a primary platform. Here, one could pursue incremental monetization, but if the variable cost-to-serve is low (or roughly equivalent to that of the primary platform) and there is generally high price sensitivity across the customer base, then it is best to pursue this type of lite offering with a single multi-platform entitlement strategy. If the variable cost-to-serve were high, then incremental monetization would be warranted and might even be a necessity.
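Since the 'lite' decision ultimately comes down to price elasticity and cross-platform substitution, a simple back-of-the-envelope model (all figures hypothetical) helps frame it: the incremental revenue from net-new lite users has to exceed the revenue given up by existing customers who trade down.

```python
# Back-of-the-envelope cannibalization math for a 'lite' offering on a less sophisticated
# platform. All numbers are hypothetical; the real inputs are price elasticity and
# cross-platform substitution willingness.
full_price, lite_price = 120.0, 60.0      # annual prices
existing_customers = 100_000

new_lite_users = 15_000                   # net-new demand unlocked by the lite offering
cannibalization_rate = 0.03               # share of existing customers who trade down

upside = new_lite_users * lite_price
downside = existing_customers * cannibalization_rate * (full_price - lite_price)

# Positive -> separate, lower-priced monetization is viable;
# negative -> a single multi-platform entitlement is the safer construct.
print(f"incremental revenue: {upside - downside:,.0f}")
```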
A good example here is Adobe's foray into mobile with its creative suite of software, particularly the flagship Photoshop app. Photoshop desktop is the oldest and most advanced image processing software and commands unparalleled market and mind-share among creative professionals worldwide. Adobe recognized that mobile (particularly tablets) offered some convenience benefits for quick edits but was in no way a substitute for the flagship app, and given the incredibly high market share of the original product, there wasn't a ton of white space to pursue (particularly as Adobe has other 'lite' apps for less sophisticated editing). It made strategic sense, then, that Adobe decided to pursue a single cross-platform entitlement strategy for Photoshop mobile and positioned it as a convenient alternative for core users who were on the move and needed to access some features. The base SaaS subscription to desktop Photoshop includes access to the mobile app as well.

Now let's contrast this strategy with a more upstart competitor in the creative image editing space, Skylum Software. Skylum's flagship product is Luminar Neo, a rich and incredibly easy-to-use desktop photo editor that heavily utilizes AI for post-processing images, allowing one to achieve in a few clicks what typically takes much longer in traditional image editors. It is monetized under both a perpetual and a subscription model, with the subscription priced ~40% lower than Adobe's equivalent offering (Lightroom). Being a much smaller player in the space, with a very small market share and a very limited portfolio of products, meant that mobile (particularly tablets) represented a TAM-enhancing product packaging and versioning opportunity for Skylum through exposure to less sophisticated users who were likely more price sensitive (assuming they could control for cross-platform cannibalization). Skylum therefore decided to rebuild its entire Luminar Neo UI for tablets to be even more user- and touch-friendly and to package a subset of desktop features under a separate subscription entitlement, available just on tablets at roughly half the price of the desktop subscription. They clearly viewed the cannibalization risk as low and saw the revenue upside from net-new users at a lower ARPU as greater than the benefit of giving desktop customers a single entitlement across platforms. Desktop customers are still able to purchase a mobile subscription; it would just have to be an incremental entitlement on top of the desktop subscription. The fact that their desktop subscription prices are well below the market leaders' likely gave them some comfort in pursuing this strategy without risking customer backlash.

In summary, re-platforming a product to a less sophisticated platform than the original is best viewed as a product versioning decision. If pursuing a net-new target segment, approach 2 with a lower-cost offering is viable, assuming one can mitigate cannibalization. If pursuing the same customer segment with no incremental cost-to-serve and a baseline position of high market share, a single entitlement spanning the new and primary platforms is likely to be the most appropriate commercial construct.
There is, however, one very important nuance to note for this category of platform transitions from more to less sophisticated platforms, and that is the definition of "lower" sophistication. While it's tempting to base this definition on a static, point-in-time view, it's really critical to apply a dynamic and longitudinal perspective. If the sophistication offered by the new platform is lower to begin with but on an upward trajectory and likely to eventually equal that of the original platform, then it's best to think of this transition not as 'high to low' but as 'high to high', as detailed in the next scenario. A good example of this phenomenon is browser technology. If we roll the clock back to 2006, the web (browser) offered a much less sophisticated platform than the desktop for complex apps, with no way to replicate rich desktop functionality on it. It was tempting, therefore, to think of web/cloud re-platforming of desktop apps as a 'high to low' re-platforming exercise and part of a 'lite' product versioning strategy rather than an existential threat to the core platform architecture. Taking a longitudinal view, however, would have suggested that, based on the trajectory of Ajax development, complex state management and an ever-expanding ecosystem of rich JavaScript libraries, browser technology was likely to catch up to desktop technology in terms of the sophistication of the apps it enables. This was Google's early insight with its productivity suite and what allowed it to take Microsoft on. Similarly, this in a nutshell was also Dylan Field's foundational insight (WebGL's enablement of desktop-level graphic rendering in the browser) that gave him the conviction to found Figma, which the mighty Adobe valued at $20B just a few years later. It is helpful, in fact, to think of this category of platform transition using the 'disruptive technology' S-curve model proposed by Clay Christensen in 'The Innovator's Dilemma'.
This brings us to the third and final archetype of platform transition which is a transition from one complex full-featured platform to another complex full-featured platform. This type of platform transition is exponentially more complex than the first two scenarios, is almost always pursued defensively and rarely done from a position of strength. It’s a journey that is riddled with landmines and the base-rate odds of successfully making it to the other side are not in an incumbent’s favor. This is the scenario that represents the figurative tectonic shift described at the beginning of the article and where most business train wrecks tend to occur. Canonical examples of companies that have successfully completed this archetype of platform transition at scale are Pixar (2D to 3D animation technology), Microsoft (Office desktop to Office 365), Oracle/SAP (on-prem to cloud/hybrid ERP) and Adobe (desktop to cloud/multi-platform creative apps with a simultaneous change in business model).
What makes this hard is that a complex, mature application developed on a sophisticated platform has typically been iterated on for many years, likely has a significant entrenched customer/user base who know exactly what the product represents, is likely to be a financially successful product with a healthy cash flow stream and is almost guaranteed to have a non-trivial amount of technical debt underpinning its architecture. Replicating the entirety of this feature/platform functionality on another advanced platform with a completely different underpinning is a gargantuan undertaking, no matter the amount of development resources directed to the effort. Waiting to replicate the full functionality of the original product before going to market with the re-platformed offering (approach 1), while tempting on paper, generally doesn't make sense, as the strategic value of faster time-to-market and the customer feedback benefits of a more iterative approach almost always exceed the benefit of waiting multiple years for feature equivalence. The downside of pursuing approach 2 (a subset of feature functionality), however, is that 1) you are all but guaranteed to disappoint a portion of users who expect more functionality, no matter how well you communicate and message the feature limitations of the re-platformed product and perhaps try to position it as a 'lite' offering to a different set of customers, 2) customers and users will justifiably expect a lower price point for the offering, especially if it is monetized separately, creating all kinds of risks, and 3) you will open yourself up to attacks from competitors built natively on the new platform, who will use the opportunity to promote their objectively superior functionality on the new platform despite your potential feature/functionality advantage on the original platform. These are difficult tradeoffs and not easy to mitigate under approach 2. These challenges are particularly amplified if the core target end-user on the new platform is the same as on the existing platform.
If a software vendor has concluded that waiting for multiple years to release an equivalent featured offering on the new platform is not viable (which is often the case), then that leaves them with 3 potential strategic choices:
- Approach 2a: Release a product with a subset of original functionality at a lower effective price point if they wish to incrementally monetize the new offering. The greater the variable cost-to-serve, the more pressure there will be for incremental monetization. Cannibalization risks need to be top-of-mind for such a decision.
- Approach 2b: Release a product with a subset of original functionality but make it part of a single multi-platform entitlement, perhaps with a small price lift over the standalone price of the original-platform offering. This approach limits downside cannibalization risk but also severely caps upside benefits, especially if the new platform offers more customer reach.
- Approach 3: In addition to a subset of core functionality, focus on net-new, high-value features adjacent to the original offering that exploit the new platform's unique capabilities and collectively create a value proposition that commands at least a roughly equivalent willingness-to-pay from customers.
What research and my personal experience revealed is that unless the texture of the demand distribution for the original product strategically warrants a re-platformed 'lite' packaging at a lower unit price, the most strategically beneficial and financially accretive approach here is approach 3. It's also by far the hardest to execute, because identifying and building truly high-value adjacent capabilities is not a trivial exercise and often pushes a vendor into the domain of an ecosystem partner or a competitor with deep capabilities in that adjacency. That creates its own set of challenges.
Let's look at a few examples of companies that successfully navigated this transition. The first is Autodesk, the creator of AutoCAD, which is the de facto global standard for 2D/3D drafting and computer-aided design software. Its extremely rich and advanced desktop app for Windows and Mac has a storied four-decade history, and it was one of the first software titles to take advantage of the advanced functionality enabled by modern GUI-based operating systems in the mid-to-late nineties. The transition to the web/cloud represented a classic complex-to-complex platform transition for AutoCAD. Here Autodesk made the strategic calculation that despite the advancement of web APIs, the sophistication they offered was not sufficient to match the full advanced functionality of the desktop-native design experience, so building a feature-equivalent product was not possible in the near-to-mid term. It did recognize the collaboration and centralized storage benefits of cloud software, however, and viewed this as not dissimilar to its approach for mobile (a more-complex-to-less-complex platform transition). Additionally, Autodesk recognized that despite the ubiquity of AutoCAD, there were accretive benefits to a 'lite', lower-priced offering that could be targeted to adjacent personas such as engineering assistants, contractor crews and others in the construction ecosystem who would benefit from this access. It chose to bundle the mobile and web entitlements into a license that offers less functionality than the advanced desktop version, but unlike Adobe, which decided to pursue a single cross-platform license for all versions of Photoshop, Autodesk decided to monetize this incrementally at a lower unit price. Their calculus presumably showed that the cannibalization risk of desktop users moving to a less capable mobile/web entitlement was not material, so approach 2a (subset of functionality at a lower price) was a viable strategy for them to pursue.
Another example of a vendor here is Tableau (now a Salesforce company). Tableau is a pioneer of modern self-service business intelligence software and developed innovative technology on the desktop that abstracted much of the code needed to aggregate and visualize data in a highly interactive manner. The web and cloud represented a complex-to-complex platform transition for them. Tableau's cloud-hosted web offering has come a long way since its initial release roughly 8 years ago, but despite the constant improvements and additions over the years, the cloud offering still doesn't offer all the advanced capabilities that desktop authoring entails. When Tableau first released the cloud product, it was via approach 2 (offering a subset of capabilities, despite their marketed claims of new features) but priced the same as desktop. Given how deeply penetrated in the market Tableau already was, with a large share position, this did not result in a materially accretive financial outcome, so it wasn't surprising that after the company's acquisition by Salesforce, Tableau moved to a single cross-platform entitlement for its 'creator' persona license that allows a user to use the web or desktop version at the same price point despite the difference in advanced feature functionality.
Microsoft O365 – Case study
This brings us to perhaps the most interesting example of complex platform transitions in recent times, that of Microsoft Office. Microsoft's Office suite is the most ubiquitous productivity software in the world, with Excel alone estimated to have 700M+ users.
The emergence of Google Labs spreadsheets in 2006 (based on the acquisition of XL2Web) and its rapid uptake over the next few years was the first major change in decades to the basic productivity software experience (particularly collaboration features) and made evident to industry executives where such software was headed in a browser/cloud-first world.
Microsoft understood that their business software crown jewels (Word, Excel, PowerPoint, et al.) would need to be ported to the cloud for long-term success – potentially under a different business model. This represented a classic innovator's dilemma challenge for them. Keep in mind these products were the best-in-class symbols of enterprise productivity tools, were ubiquitous worldwide and had been under iterative improvement for ~30 years by this point. Porting the functionality from the desktop to a new platform required a very thoughtful platform transition strategy to maintain that leadership position. The only thing that was clear to Microsoft was that "Approach 1", i.e. waiting to release until the feature functionality of the Office web products equaled that of the desktop, was out of the question. Despite Microsoft's resources, such an approach would take years (perhaps decades) to bring the products to market and would allow competitors like Google to strengthen their beachhead with cloud-native productivity suite offerings. The head of Microsoft's cloud transition explained their thinking in the early days to me in the following manner. The framework they used to think about Office cloud vs. desktop was something called 'value vectors', inspired by the value-vectors framework in Kim and Mauborgne's popular 'Blue Ocean Strategy' text. Here's what he drew out for me as part of the discussion:
If you're not familiar with this framework, the way it works is that you list out the core dimensions of value for a product (defined as the popular 'jobs-to-be-done') along with price on the x-axis, and then contrast the collective value prop of a product with any internal or external competitive alternative by drawing a line across the ratings for each dimension. High is better than low for every dimension except price, where low is more attractive. You then ask yourself: is the collective value prop for the product, at the proposed price point, a "FIT" for a meaningful segment of the market that would rationally pick it over the alternative?

The above chart describes how they thought about this for Office Cloud. Office desktop was a perpetual license product where both business and consumer customers had to pay for upgrades. It had extremely rich and powerful feature functionality that had been developed over 25+ years, and its UX had been perfected. Where the desktop product was weak, though, was on collaboration and web-native functionality such as multi-user authoring and sharing; it didn't have storage embedded and had clunky connectivity to SharePoint, all areas that were becoming more important with the cloud. When they first built Office Cloud, the team focused on capabilities such as multi-user authoring and embedded cloud storage for sharing, as well as native SharePoint connectivity. They also priced the product as a SaaS offering (instead of perpetual), with an entry point at roughly a third of the price, delivered as a subscription that eliminated the need for upgrades.

In terms of segmentation, they then asked themselves who this collective value prop would be a good strategic fit for and where they would need to price it to effectively win in that market without adversely affecting the desktop Office business. Where they landed initially was on segments that included students, legal firms, journalists, and researchers. These groups valued the collaboration features in Word and Excel more than they cared about the loss of features and the slightly degraded user experience that came with browser access vs. a native client. Yes, the product was generally available to all, but they focused outbound marketing efforts toward these target segments. Over time, the strategy evolved as the product improved in feature functionality. Office 365 has ended up becoming Microsoft's preferred product for everyone and resulted in a dual-entitlement strategy, with a desktop entitlement for all O365 customers now that desktop is tightly integrated with cloud-connected features.

This is a fantastic example of a very coherent 'Approach 3' undertaken successfully, with a business model change, as part of a complex-to-complex platform transition. By recognizing and focusing on net-new features that leveraged the new platform's capabilities (multi-user authoring, embedded storage, and native SharePoint connectivity and interoperability) and using those to offset the feature gaps against the original product, Microsoft masterfully navigated the challenges of this type of transition and allowed users to self-select into the right product without any adverse impact on the business model.
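For illustration, here is what a value-vectors comparison might look like in code; the dimensions are the ones named in the discussion above, but the 1-5 ratings are hypothetical placeholders of my own, not the actual scores from the chart.

```python
# Hypothetical ratings (1 = low, 5 = high; for price, higher = more attractive/cheaper)
# illustrating the 'value vectors' comparison described above.
dimensions = ["feature richness", "UX maturity", "multi-user authoring",
              "embedded storage / sharing", "SharePoint connectivity", "price attractiveness"]

office_desktop = {"feature richness": 5, "UX maturity": 5, "multi-user authoring": 1,
                  "embedded storage / sharing": 1, "SharePoint connectivity": 2,
                  "price attractiveness": 2}

office_cloud = {"feature richness": 3, "UX maturity": 4, "multi-user authoring": 5,
                "embedded storage / sharing": 5, "SharePoint connectivity": 5,
                "price attractiveness": 4}

# Print the two 'value curves' side by side for a quick FIT comparison
for d in dimensions:
    print(f"{d:28s}  desktop: {office_desktop[d]}  cloud: {office_cloud[d]}")
```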
Analyzing Alteryx’s re-platform strategy retrospectively
The company's cloud re-platform efforts involved several stops and starts over the years. Here, I'll focus on the most recent effort to build and launch the 'Alteryx Analytics Cloud Platform' based on the revised R&D strategy resulting from the Trifacta acquisition in late 2021, which entailed a split-plane architecture.
Transitioning Alteryx's flagship offering to the cloud represented a classic complex-to-complex platform transition, i.e. the hardest kind. That meant we had the same three fundamental strategic re-platforming options to choose from. The re-platforming effort would consist of two core pillars: back-end and front-end. At the highest level, our thinking was as follows. The first of these pillars (the back-end), while invisible to the end-user, is generally the heavier lift, especially given the strategic dynamics of the cloud ecosystem (an oligopoly of platforms, each with a distinct value prop that necessitates a presence, yet each with a unique set of native services requiring bespoke engineering effort to ensure deployment and runtime compatibility). The IP to be built included services like asset management, user management and entitlements, in addition to the all-important core data processing engines and orchestration. The front-end entailed the actual feature functionality and user experience, which in Alteryx parlance were abstracted and configurable "tools" that enable data connectivity, manipulation and automation. The front-end and best-in-class overall user experience had always been a big component of Alteryx's differentiated value prop. Therefore, the thinking was that if we could acquire the right IP to accelerate the more commodity, but still important, aspects of the back-end architecture, it would dramatically shorten the build path on the back-end and allow the company to focus its finite technical resources on the more differentiating UX aspects of the new platform. This is what led Alteryx to Trifacta (another innovator in the data pipelining space), which had recently undergone a similar journey to re-platform to the cloud. Trifacta had built many of these back-end cloud-native services, in addition to a query federation layer that could link to a proprietary in-browser data processing engine or use more scalable third-party engines such as Spark for running data workloads. Using that IP as a starting point meant that the company could conceivably shift focus to the actual UX more quickly.
Despite the promise of acceleration provided by the acquisition, the company still had to choose which of the three fundamental re-platforming approaches outlined above to structure the roadmap around. Approach 1 (pursuing full feature parity) was out of the question, as we didn't have the luxury of waiting multiple years for a feature-equivalent offering given how quickly the market was moving. That left us with approach 2 or 3 to pick from. Approach 2 (porting a subset of functionality), while the easiest to execute, was the least attractive strategically for all the reasons mentioned earlier. While we expected the hosted cloud offering to unlock some new personas, our internal studies showed that a 'lite' offering wasn't necessarily very accretive from a monetization standpoint, and it would put pressure on the brand's perception as a premium, highly advanced, and broad do-it-all analytic automation platform. That brought us to approach 3, which, while the most strategically coherent, was also the hardest to execute. This is the core strategy we agreed to pursue, and we set about trying to identify the right adjacent capabilities to include with the re-platformed offering. We looked at all possible stack adjacencies to understand where we had the best shot of pursuing a value proposition that sat at the intersection of potentially high value and blue ocean (vs. red ocean).
After much analysis, debate and technical prototyping, we agreed on 1) the "must-have" features of the original platform that needed to be included in the cloud offering, and 2) the set of net-new capabilities we would pursue in the re-platformed offering. The areas we identified and prioritized for the latter included:
- Auto insights & reporting: Instead of diving into the saturated and commoditized market of traditional dashboarding, the idea here was to leapfrog ahead to a re-imagined intelligent visual analytics experience that surfaced insights to end-users in a more automated manner vs. having them manually build standard charts through drag and drop. The IP for this capability was acquired.
- Advanced machine learning: A next-generation automated machine learning experience that allows any motivated analyst to build sophisticated machine learning models for classification and regression types of supervised learning problems without having to learn the deep technical aspects of machine learning and needing to know how to code
- App building: A reimagined and much more sophisticated app-building experience with more dynamic, bi-directional flow of logic and multi-page management of user-defined state
- Metrics layer: A logical and semantic layer of compound calculated metrics that could be plugged into downstream data pipelines to ensure consistent cascading of logic across the organization
- Spatial analytics: A reimagined experience for conducting spatial analytics at cloud scale and with cloud-native visual analytics functionality which was a core part of the company’s heritage and original source of differentiation
We deployed teams of PMs, engineers and designers to pursue core capabilities in each of these areas. We were realistic in acknowledging early on that it was unlikely that every one of these would be a home run. The thinking, however, was that we could see which areas had the strongest market traction and double down our focus there while scaling back in other areas.
Lessons Learned – What we got right and what we didn’t in retrospect
While we got a lot of things right with this massive and complex multi-year initiative, hindsight is always 20/20, and as is often the case with complex endeavors, there were lessons learned along the way that would prompt many of us to make some decisions differently if we were embarking on this journey again. I outline these below.
One of the most ubiquitous concepts in the software world is that of the MVP (Minimum Viable Product), originally coined by Eric Ries of Lean Startup fame. It has since evolved and spawned many variants such as minimum sellable product, minimum lovable product, minimum differentiated offering and so on. The fundamental concept, however, remains the same: on the spectrum from waiting for a perfect product before releasing it to releasing a completely bare-bones product, there is some threshold where, despite less-than-perfect capability, the speed-to-market benefits offset the limitations of the feature gaps. The concept originated in the B2C world but applies to B2B products as well, albeit generally with a higher threshold.

Under the "Approach 3" philosophy described in the previous section, the fundamental roadmap decision variables we had to optimize across were 1) the overall timing of the release, 2) how much to focus on back-end platform utility and breadth of deployment availability vs. end-user feature functionality, and 3) how wide vs. deep to go on the different net-new features described above. While optimizing across these three is an interdependent and iterative exercise, one way to think about this practically is that if we were to fix the first ('time') variable (i.e. assume a fixed release date at a certain point in the future), that gives us a certain amount of total engineering cycles and development capacity between that point and the release date. Oversimplifying a bit (as resources are not entirely fungible), that pool of development capacity can be thought of as needing to be allocated to platform functionality vs. end-user functionality (e.g. 60% platform vs. 40% feature), and then the latter piece (40% in this hypothetical example) to the port-over of original features plus the depth vs. breadth of the net-new feature capability areas discussed above. Yes, technically the platform functionality also has to be prioritized against a set of specs; however, these tend to be more binary requirements (e.g. availability in a specific region/cloud provider, SAML, security certs like SOC 2 Type 2, etc.).

The tradeoffs we made here entailed going heavier on the platform functionality side of the equation (to ensure greater availability across regions and more cloud providers, particularly given the split-plane architecture). On the second decision, net-new end-user functionality, we chose to go broader rather than deeper. These tradeoffs were intentionally optimized to get past as many IT objections as possible and to maximize bookings. In retrospect, we would likely have been better off optimizing for fewer deployment scenarios and focusing more of that finite resource capacity on end-user utility. Additionally, we would likely have been better served by deeper feature functionality in a subset of the 5 net-new capability areas we tried to pursue, as opposed to less depth across all five. Yes, this would have reduced our year 1 and 2 cloud bookings and given us fewer new capabilities to highlight to the market; however, it would have given us greater momentum with early customers and been more aligned with the brand's promise of exceptional product experiences that result in maniacal fan bases.
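To make the allocation tradeoff tangible, here is a small illustrative calculation; the 60/40 split echoes the hypothetical above, while the capacity figure and the per-area percentages are made up purely for the example.

```python
# Illustrative arithmetic only: a fixed release date implies a fixed pool of capacity
# that must be split between platform work and end-user features, and then breadth vs. depth.
total_capacity = 1000          # engineer-weeks available between now and a fixed release date

platform_share, feature_share = 0.60, 0.40
platform_capacity = total_capacity * platform_share   # regions, cloud providers, SAML, certs...
feature_capacity = total_capacity * feature_share     # ported features + net-new capabilities

# Breadth: spread feature capacity across the core port plus all five net-new areas
broad = {"core port": 0.40, "auto insights": 0.12, "auto ML": 0.12,
         "app building": 0.12, "metrics layer": 0.12, "spatial": 0.12}

# Depth: same core port, but concentrate the remainder on two bets
deep = {"core port": 0.40, "auto insights": 0.30, "auto ML": 0.30}

for name, plan in [("broad", broad), ("deep", deep)]:
    allocation = {area: round(share * feature_capacity) for area, share in plan.items()}
    print(name, allocation)
```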
The other big lesson learned was around the interplay between the natural software development lifecycle and the typical adoption curves across different segments, and reconciling that with an organization's specific GTM strategy. Specifically, which of these should dictate the other? The typical GTM and customer adoption pattern seen across most B2B software startups, and most net-new internally launched products, is an initial focus on the SME segment, where product-market fit is honed and the product is hardened before it is taken to the more demanding enterprise segment. This was also the case with Trifacta's multi-tenant managed offering when that company was acquired (i.e. they had greater traction with SMEs).

Alteryx, however, was at this juncture in the midst of a fairly significant GTM transformation from a land-and-expand to a more traditional enterprise-centric GTM motion. This entailed a significantly greater focus on larger deals targeted at the G2K segment, with all the supporting investments and organizational changes associated with it. Furthermore, many of these G2K customers were clamoring for cloud and had been important voices in the company's big-bet decision to pursue its intended (split-plane) architecture. This presented a conundrum. Should the product strategy line up with the intentional corporate and GTM strategy from the launch stage (i.e. an enterprise focus), or should it follow the more traditional product adoption path (i.e. an SME focus)? Not aligning with the corporate/GTM strategy (given the magnitude of that transformation) felt incoherent. Additionally, being a publicly traded company that had made a significant acquisition bet put pressure on the org to show meaningful cloud revenue traction to Wall Street. These two factors played a large role in the decision to pursue a more platform-centric roadmap and to target the enterprise from the get-go.

The challenge, however, is that even with the best technical resources, it is very difficult to shortcut the traditional software development lifecycle and have an enterprise-ready and hardened version of a new offering from the first release. Not only do stability issues and bugs have to be ironed out, but enterprises are likely to have more complex use cases that surface product gaps in the early stages. This is precisely the situation we ended up in, which led to deployment rates and early adopter scores being below what we had targeted. The lesson learned here was that no matter what the overarching GTM strategy is, a complex new enterprise software product will HAVE to go through the traditional release and hardening cycle, and GTM strategy should NOT dictate or override this natural cycle (even if it conflicts). We would likely have been better off starting with an SME focus and asking enterprise customers to wait until v2 or v3 before targeting the product to them, despite the slower bookings ramp.
The third big lesson learned was how to approach KPIs that incent conflicting behavior and the need to have one North Star organization-wide KPI per critical initiative. I go into this lesson in detail in another post, so I will recap it briefly here and suggest checking out that post for additional detail. In short, one of the challenges with having multiple KPIs for an initiative is that there are likely to be metrics that incent conflicting behaviors. This has the potential to create confusion and derailment risk. The solution to this problem is to always have one single North Star metric for a critical initiative that you are trying to maximize or minimize, and to think of the remaining KPIs as constraints. The North Star metric can and should evolve over time, but at any given point, it should dictate the top priority and incent the right behavior.

For cloud, we defined success early on as 'achieving product-market fit and go-to-market fit'. This was translated into two categories of KPIs: the first being 'customer adoption and contentment' KPIs (e.g. platform deployment rate for sold SKUs, 1-month retention rate of first-time end users, NPS and SUS scores) and the second being 'commercial outcomes' (i.e. bookings (ACV/ARR), renewal rates and net retention). What we didn't do was align on which of these was the North Star metric, so when conflict inevitably arose, we didn't have a consistent set of rules by which to address and reconcile issues. Product teams rightfully wanted to focus on early adoption and success, while GTM teams were focused on achieving product-specific sales quotas. We had several very large and strategic customers raise their hands and ask for the product to be added to their order forms shortly after release. We knew at the time the product wasn't entirely ready for them (e.g. it may not have been available on their preferred cloud provider or in their specific region, or it may have lacked a must-have feature necessary for their bespoke use-case), but because of the focus on bookings as a top-level outcome we were chasing and had incented the teams around, we ended up selling it to them anyway with the expectation that the roadmap would catch up in short order.

The outcome of this dynamic was predictable in retrospect and not great for us. Though we hit several commercial KPI targets that year, the product didn't have the deployment rates we wanted, didn't meet the NPS targets and didn't develop the reputation we wanted amongst all early adopters (many of whom weren't the target ICP). Our cloud renewal rates for customers who didn't sign multi-year contracts were also not where we wanted them to be. Had we focused on a single North Star KPI, we would likely have increased focus on customer success and adoption at the early stage, versus trying to go harder at distribution right out of the gate at the expense of creating a durable revenue stream powered by a positive, reputation-driven flywheel.
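A minimal sketch of the 'one North Star, everything else is a constraint' idea, with metric names and thresholds that are purely hypothetical, might look like this:

```python
# Hypothetical metrics and thresholds; the point is the structure: one metric is
# maximized while the rest act as guardrails rather than co-equal goals.
north_star = "platform_deployment_rate"        # early-stage priority: adoption over bookings

guardrails = {
    "first_month_user_retention": 0.60,        # must not fall below
    "nps": 30,
    "bookings_attainment": 0.70,               # acceptable floor while adoption leads
}

def within_guardrails(metrics: dict) -> bool:
    """True only if every guardrail KPI clears its floor."""
    return all(metrics.get(k, 0) >= floor for k, floor in guardrails.items())

def score(metrics: dict) -> float:
    """Options that violate a guardrail score zero; otherwise rank them by the North Star."""
    return metrics.get(north_star, 0.0) if within_guardrails(metrics) else 0.0

print(score({"platform_deployment_rate": 0.85, "first_month_user_retention": 0.72,
             "nps": 41, "bookings_attainment": 0.75}))
```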
Conclusion
Platform shifts have historically been, and undoubtedly will continue to be, a defining feature of the evolutionary innovation cycle in the software industry. The introduction of certain types of platforms that trade off one value vector for another (e.g. technical sophistication for reach) can present an adjacent market expansion opportunity for incumbents, or it can spawn an entirely new set of "new-platform-native" competitors, as was the case with mobile over the last decade and a half. The hardest type of platform transition to navigate, however, is that from one complex platform to another, and history suggests that the odds incumbents face in this journey are not in their favor. I hope that the insights, mental models, frameworks and lessons presented in this post can help technology leaders who are facing this challenge design more coherent strategies and increase their odds of successfully making it to the other side.