Scaling Customer Success at Retool
Edit: Since originally writing this post, we have restructured as a team, impacting some of the roles and functions presented.
I joined Retool 656 days ago. It’s somewhat jarring to put that number on paper considering how distinctly I remember the early days. Having come from software engineering/consulting at a corporation ~3000x the size of Retool, I had little understanding of an enterprise go-to-market strategy, how success fits into broader company operations, or the abundance of creative greenfield work in a fundamentally engineering-driven role.
In the past year, we’ve more than tripled team headcount, quadrupled as a company, and vastly matured our operating structure and success methodology. There’s been a slew of learnings from working in the field, so I thought I’d break down how the team has taken shape, where we’ve put focus, and what I believe winning looks like.
The Beginnings
Our team was initially composed of a scrappy group of Deployed Engineers (DEs) — most of us hadn’t been in a true go-to-market function before, but had dabbled in some variation of engineering, product, consulting, or operations in the past. I joined at what felt like an initial inflection point for the company: we had clear product-market fit, an ardent community of enthusiasts and customer champions, and were pioneering a solution at the crosshairs of two rapidly growing markets (low-code and internal tooling).

The role itself was beautifully malleable: our work began immediately post-sale, with goals loosely defined yet implicitly understood. DEs were given a book of business of ~25 enterprise accounts (~5–8% of total ARR) to manage, ensuring a seamless deployment, a healthy builder experience for use cases, proper adoption of tools, and ultimately expansion to new business units. With north-star goals focused on NDR (net dollar retention), we had autonomy and flexibility in defining the activities that best achieved those outcomes. The team lived in a space of experimentation and foundation-setting: with most of us relatively new to the role, we needed to establish processes to operate under while testing new ways of working with customers and measuring health. The working surface area was broad and generally entailed:
- Core customer work: technical training/debugging, joint account strategy, app prototyping/architecture, feature scoping, stakeholder alignment, business reviews, customer on-sites
- Internal Meetings/Operations: Team weeklies, internal account syncs, account updates, interviewing, product feedback sessions, 1:1s
- Auxiliary Work: Due to the relative nascency of the team, our work extended beyond pure technical advisory into that of a CSM/TAM (value alignment/driving adoption/contract negotiations), a Product Liaison (prioritizing and scoping features with EPD), and a Success Operations contributor (building out apps or tooling for capturing customer data, building data pipelines, and instilling processes to ensure consistent information capture)
- OKR/Project Work: Our OKRs drive most of the non-customer-facing work. This has included establishing and building customer health scores, rewriting the field engineering (Support, Success, Sales Engineers) onboarding guide, implementing customer adoption initiatives, and building the team
Many of the OKRs from the last six quarters have addressed similar core objectives, though through slightly varied angles and approaches. As we’ve extracted larger themes from past wins/losses, we’ve begun to home in on more impactful areas to drive growth. An example of this is an OKR on health scoring — an early iteration included some lightweight data mapping and modeling to directionally understand how a customer was using the tool. After a few months of reps with the health score, we better understood where the model was weighted too heavily, the data inputs we hadn’t considered, and the additional rigor a more advanced model could offer. We partnered with the data team to build out the most recent iteration, running discovery, scoping, and testing with sample inputs along the way.
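To make that concrete, here’s a minimal sketch of what an early, hand-tuned iteration of a score like this might look like. Everything in it (the usage signals, normalization caps, and weights) is invented for illustration, not taken from Retool’s actual model.

```python
# Hypothetical sketch of an early-iteration customer health score.
# The signals, caps, and weights below are made up for illustration;
# they are not Retool's actual model.

from dataclasses import dataclass

@dataclass
class UsageSnapshot:
    weekly_active_builders: int   # builders editing apps in the last week
    weekly_active_end_users: int  # users opening apps in the last week
    apps_in_production: int       # apps shared beyond the building team
    open_support_tickets: int     # unresolved tickets

# Weights reflect how strongly we believe each signal predicts retention.
WEIGHTS = {
    "builders": 0.35,
    "end_users": 0.30,
    "production_apps": 0.25,
    "support_load": 0.10,  # inverted: more open tickets lowers the score
}

def health_score(u: UsageSnapshot) -> float:
    # Normalize each signal to [0, 1] against a rough cap.
    builders = min(u.weekly_active_builders / 10, 1.0)
    end_users = min(u.weekly_active_end_users / 200, 1.0)
    production_apps = min(u.apps_in_production / 15, 1.0)
    support_load = 1.0 - min(u.open_support_tickets / 5, 1.0)

    score = (
        WEIGHTS["builders"] * builders
        + WEIGHTS["end_users"] * end_users
        + WEIGHTS["production_apps"] * production_apps
        + WEIGHTS["support_load"] * support_load
    )
    return round(score * 100, 1)  # 0-100, easy to bucket into red/yellow/green

if __name__ == "__main__":
    # Prints a 0-100 score for a sample account.
    print(health_score(UsageSnapshot(4, 120, 6, 1)))
```

A model like this is easy to reason about, but the caps and weights encode guesses, which is exactly what the more advanced, data-team-backed iteration was meant to pressure-test.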
How (and Why) We’ve Evolved
The team’s undergone significant change in the last year. Some highlights have included:
- Bringing on a head of customer success
- Spinning up a true customer success management function
- Creating a professional services team and offering
- Consolidating Sales and Success under one roof
- More granularly breaking out roles and responsibilities across pre/post sales
- Redefining our customer health score
- Establishing clearer metrics and goals across the customer lifecycle
- Splitting out the customer engagement model and success tiers
- Establishing a set of core Retool value drivers
Most of these changes were proactive and top-down — the growing volume of customers necessitated building out new functions, aligning under a shared vision, and establishing norms around customer health.
Yet with the product and motion still in flux, a considerable amount of the tangible change was derived through boots-on-the-ground fieldwork. Success is in the game of pattern matching and rapid iteration, so while we had opinions on where and how Retool fit into a customer stack, these beliefs continually evolved to reflect the realities of our customer usage. Many core product and process developments were rooted in an acute customer pain or collection of themes. These have included:
- The Introduction of RuntimeV2: As the product matured, we noticed customers building larger, more complex applications that exceeded many of our original expectations. Load times ballooned and interactions slowed drastically. Ryan noticed this pattern quite early on and addressed it directly: he ran an audit of customers facing performance issues with associated financial/contraction risk, created enablement materials on best practices for app architecture and performance debugging, and proposed a Performance task force to Leadership. Within a couple of weeks, we spun out a dedicated performance function to identify the root causes behind performance degradations and define a path forward. We ran an analysis on the volume and types of components causing issues, the breakpoint where Retool apps began to slow, and (the lack of) product alerting to better inform users of app size (a rough sketch of that kind of analysis follows this list). Engineering concurrently worked on a rewrite of the core Retool runtime, which has dramatically improved load times, interaction speed, and general application snappiness.
- An Updated RBAC Model and Permissions Structure: Permissions are a difficult and nebulous problem to get right from the start. On a platform that requires both app- and resource-level permissions, ways for builders to share their apps with adjacent teams, and simple methods to provision new users, complexity naturally emerges. As customers began rolling out apps to a broader set of users, we noticed difficulties with the core permissions model and a lack of best practices. Christie began to unearth some of the underlying issues we saw in the field — she documented the ways in which our permissions could be applied, including a diagram depicting all available actions and operations. She paired closely with our Hub team (also working as a Hub Liaison) to rethink the RBAC controls and the operations they needed to support. We’ve since rolled out a few minor updates to solve the most pressing needs while working through a longer-term iteration and model.
- Documentation and Best Practices: There was little formal documentation when I first joined. We had lightweight starter guides for expectations and early asks, but lacked robust writing on core operations, systems, onboarding, and the technical details of the tool. I spent a considerable amount of my early months writing, primarily on a field-engineering technical onboarding guide with a map of the technical landscape of Retool (app building, SSO, authentication, deployments, scaling Retool). Many sections were also delegated to more tenured folks with subject-matter expertise (considering how much I was still learning myself 🙂). The success team has led (and continues to lead) many documentation initiatives, including Julie’s Cloud to On-Premise Migration guide and Ben/Alejandro/Justin’s improvements to our deployment guides.
- Product feedback and iterations: While we spend the majority of our days interacting with customers, it’s easy to let this context and insight remain in our heads. There’s been significant work in improving the ways we work with Product, including redefining and formalizing a liaison program for DEs, creating feature-specific channels for customer champions to interact directly with our EPD team, and defining new feature one-pagers to explain the customer implications of each ship. Gopal has continued to capture the qualitative side of conversations through his Customer Research Group, in which he documents and extracts larger themes from his hundreds of conversations. The document is 92 pages today, full of insights and direct customer feedback on the product.
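As a rough illustration of the performance audit mentioned in the RuntimeV2 bullet above, here’s a minimal sketch of that style of analysis: bucket apps by component count, look at median load time per bucket, and flag apps past a working breakpoint. The app names, numbers, and the 300-component threshold are all invented for the example, not figures from the actual audit.

```python
# Hypothetical sketch of a performance audit: where do large apps start to slow?
# All data and thresholds below are fabricated for illustration.

from collections import defaultdict
from statistics import median

# (app name, component count, load time in seconds) -- sample rows only
apps = [
    ("inventory-dashboard", 40, 1.2),
    ("refund-tool", 85, 1.9),
    ("kyc-review", 160, 3.1),
    ("ops-console", 310, 7.8),
    ("billing-admin", 420, 12.4),
]

# Bucket apps by component count and compare median load times per bucket
# to see where load times start degrading sharply.
buckets = defaultdict(list)
for name, components, load_time in apps:
    buckets[components // 100 * 100].append(load_time)

for bucket in sorted(buckets):
    print(f"{bucket}-{bucket + 99} components: "
          f"median load {median(buckets[bucket]):.1f}s "
          f"({len(buckets[bucket])} apps)")

# Flag apps past a working breakpoint so the team can reach out proactively.
BREAKPOINT = 300  # made-up placeholder threshold
at_risk = [name for name, components, _ in apps if components > BREAKPOINT]
print("Apps past the breakpoint:", at_risk)
```

Bucketing by component count is just one lens; as noted above, the actual analysis also looked at the types of components involved and the absence of in-product alerting on app size.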
While consistent client work can take its toll, the constant exposure to our end users has also been a hugely beneficial input to some of the most impactful product ships. These interactions have helped us shift direction by revealing product shortcomings and informing further thinking on their underlying implications. The early work on performance was a response to apps that weren’t cleanly supported on the early runtime, but it also served as a necessary lesson in the types of apps we wanted to support and advocate for. The updates to deployment docs addressed some real challenges, but they’ve raised questions about how prescriptive we should be with deployment configurations. When we see customers taking longer than expected to launch an application to production, it forces us to reconsider whether our expectations were properly set and whether the team executed against its roles.
While it’s not uncommon to see customers just ‘get’ Retool within a few weeks of tinkering, we’ve amassed most of our learnings from the times when this isn’t the case. It’s allowed us to introspect on where the product or process doesn’t scale, and to drive direct actions to address those gaps.
Growth at a Startup
Startups are really f*cking fun when things are going well. When I first joined, there was little indication of the impending tech market crash, the incoming onslaught of competitors and undercutting, or the need for a tried-and-true quantitative definition of value. This last year has been the honest vitamin vs. painkiller test.
I’m happy with the moves that Retool has made. We intentionally raised funding at a lower valuation in a time of drastic overvaluations, dropped our prices while other vendors raised theirs, and continued to invest in being in-person first. There’s been a focus on the long game, knowing that it’s not an easy game to win.
I recently started Claire Hughes Johnson’s Scaling People on Kindle. A quote from an early chapter:
A few years ago, I was giving a talk at a 40-person startup. During the Q&A, someone asked what processes I thought the company should put in place. My answer was “I’m not going to tell you which processes you should put in place. But I will tell you that you need them, and you need them sooner than you realize.” When the person asked why, I said, “You know why playing the game is fun? Because it has rules, and you have a way to win. Picture a bunch of people showing up at an athletic field with random equipment and no rules. Someone is going to get hurt. You don’t know how to play, you don’t know how to score, and you don’t know how to win.” It’s critical for companies and teams to establish the playing field on which everyone participates and marks progress.
For some time after I first joined, I don’t think we totally understood what Success meant for Retool. It was a newer market with much still to be seen and learned. I feel the narrative has changed recently — we understand the market we’re going after and the potential of our offering. Our mission has adapted to account for the hiccups of the last year, and it’s been a dominant driver in both our product direction and selling motion.
So How Do We Win?
In the year and a half that I’ve been at Retool, I’ve carried the same conviction in the product and team that I had when I first started. Staying on GTM wasn’t the explicit initial intent, but I believe we’re at a point in our trajectory where winning is largely dependent on a strong go-to-market strategy and execution (alongside a product that meets the needs of our customers). We’ve seen a lot in this past year, including teams churning off Retool for alternative solutions, procurement coming back with a fraction of the previous year’s budget, and teams deprioritizing internal tooling (with certain internal tooling teams being laid off or relocated). I think of a GTM motion as true alignment across sales and success in how to sell, position, and grow Retool within companies. It’s alignment on how we deliver and capture value, and on the domains, functions, and use cases that should be built on Retool. It’s clearly understanding how roles play together, and how shared goals translate to individual incentives and workstreams. It’s moving from reactive to proactive.
Why I’m Energized
While a small but growing Retool mafia has gone on to start their own ventures, the majority have stuck it out through the ebbs and flows. Why?
The long-term potential for Retool is still largely unproven, with the software market still open for disruption and shifts. We continue to see many large, flourishing companies that simply don’t have a consolidated technology to manage their operations.
In the near term, I plan to keep building out our Scale motion — engaging with a larger book of business, running experiments and programs across our long tail of customers, and onboarding new team members (while continuing to write up learnings!). I do see some more crossover with Product in the longer term, but only time will tell. 🙂
Much to come!
-Sachit