
Scott Johnston, CEO of Docker, on growing from $11M to $135M ARR in 2 years

Jan-Erik Asplund

Background

Scott Johnston is the CEO of Docker. We talked to Scott to learn more about Docker's struggle to monetize the foundational standard it launched in 2013, how the company pivoted following its 2019 recap, and what kind of company Docker is building for the future.

Questions

  1. Docker has had a long and rich history since its initial release in 2013. Today, devops, infrastructure, and software engineering look very different from what they did a decade ago, and Docker has a lot to do with that. You’ve been here since 2014. Can you recap the history of the company for us?
  2. Docker’s new focus since 2019 may be characterized by a refocusing of energy on developers and their bottom-up adoption of Docker. Can you talk about your biggest learnings from selling Docker Swarm as an enterprise offering that gave you conviction to change the model? How is Docker’s approach post-recapitalization different based on what you learned?
  3. So Docker has gone PLG. Do you see that potentially changing into the future?
  4. Recaps are challenging for the team and company culture. You went from ~420 employees pre-pivot to about 60 with Docker 2.0. Can you talk about how you rebooted the company culture and some of the key components? What did the team composition look like at 400 employees and what does it look like at 60?
  5. Between 2020 and 2021, Docker went from $6M ARR growing 170% to $50M ARR growing 733% YoY. Can you explain how Docker has defied the ‘laws of gravity’ of growth, whereby growth rates decelerate at scale? Are you seeing any increasing returns to scale in the business?
  6. Expansion revenue has been key to Docker’s resurgence per Tribe’s numbers showing a turnaround in revenue retention by cohort. Can you talk about the operational levers you have to continue to drive revenue expansion?
  7. Let’s talk about cloud IDEs and cloud-based development. Using Docker and shifting development to the cloud creates consistency across teams and enables collaboration without a local development environment. Can you talk a bit about what you’re seeing in terms of how cloud IDEs touch on Docker?
  8. Interoperability is key to containers, making open source and its network effects key to Docker’s moat. However, that often means that Docker containers as an open standard end up bundled into proprietary products and ecosystems—for example, GitHub Codespaces. Can you talk about how Docker thinks about its open source business model, specifically in relation to how and where it looks to monetize?
  9. There’s a view out there that WebAssembly and Wasm containers are potential disruptors to traditional Docker containers in the same way that containers were a disruptor to virtual machines. Docker has positioned these as complementary rather than competitive. What’s the misconception people have about these two technologies that is ultimately going to drive success for Docker?
  10. A big focus in Docker’s plans for 2023 is developer productivity and particularly, safety. Can you talk a bit about this idea of safety and why it’s such a big focus for Docker going forward?
  11. In five years, if everything goes right for Docker, what does it become? How will the world change as a result?

Interview

Docker has had a long and rich history since its initial release in 2013. Today, devops, infrastructure, and software engineering look very different from what they did a decade ago, and Docker has a lot to do with that. You’ve been here since 2014. Can you recap the history of the company for us?

If you go back to when I joined Docker in 2013-14, it was just as the DevOps wave was rising. 

Before, I was at Puppet—one of the four big DevOps players alongside Chef, Ansible, and Salt—and at the time, we were really focused on this question of how you help developers create applications that impact their business and deliver them to production as quickly as possible.

Docker, which was open sourced in March 2013, just came on the scene and disrupted it. It did that because it was able to accomplish much of what that category of tools accomplished, but with orders of magnitude less effort.

Call it configuration management or automation—there were a number of different category labels thrown around at the time—and Docker was able to accomplish what those tools did with one-fiftieth the effort. That ignited my imagination while I was at Puppet, and six months later, I came to Docker.

Docker had 20-some people at the time. Modern-day containerization didn't exist prior to March 2013 and then, a year later, it exploded. 

The reason is that Docker did indeed allow devs to move faster. It gave portability to applications so you weren't locked into any particular cloud or on-prem infrastructure. It allowed the DevOps toolchain to just move faster and more safely.

Docker was the unlock the industry had spent a good decade of DevOps looking for.

Now, with all that excitement came the dollars, the growth, and all the press adulation. Long story short, we stretched ourselves too thin looking at too many opportunities, such that by 2019, despite having created this market, the company wasn't realizing its full potential.

That led to some very, very serious discussions of options. We spent the summer really just exploring what we were going to do—we saw the potential, but the way the company was configured and growing, we weren't unlocking it. 

We’d been focused on this enterprise business, and we’d focused on production, deployment, technology, and product. 

But over here, we had all this bottom-up developer adoption, this fanatic love, consumption, and growth that we weren't paying attention to. 

Meanwhile, we were also spending a lot of money trying to acquire customers, trying to grow.

The question we asked then was, “What if we could allow the company to focus just on this developer market? Would that unlock value? Would that create great products for developers? Would that really, in some sense, allow the company to live up to its potential and the opportunity that it created back in 2013?"

I was leading product for all of that period through 2019. I was offered the job of CEO and I accepted because I, too, felt like there was an untapped opportunity here. That was the end of the beginning and the beginning of the second book of Docker.

Docker’s new focus since 2019 may be characterized by a refocusing of energy on developers and their bottom-up adoption of Docker. Can you talk about your biggest learnings from selling Docker Swarm as an enterprise offering that gave you conviction to change the model? How is Docker’s approach post-recapitalization different based on what you learned?

The big learning from that was that it’s not just about Swarm and Kubernetes. When you’re selling into a company, you really need to—and this sounds obvious in retrospect—align your monetization as closely as possible with the part of the org, or the users in the org, that are getting value from the product.

What we had in 2019 was this fantastic bottom-up adoption—developers just loving the product, bringing it into the organization, using it at home for their hobbies etc. 

But the monetization was over here with Ops, while the bottom-up love and the out-of-control—and I mean that in a good way—consumption was on the developer side.

We’d go in and Ops would be calling us: "Hey! My developers are going crazy over this stuff. Can you come in and talk to us?" Even though developers knew about Docker and were excited by it, we had to educate Ops, and we spent a lot of time doing that education. The most cynical or negative Ops people would say, "This is just one more thing I have to manage that these devs are throwing at me, and I have to pay for it as well?”

What we had was a lot of go-to-market time, money, and headcount being spent on Ops, which was organizationally—particularly in these large organizations—pretty far away from where the bottom-up love, consumption, usage, and real value was being recognized.

That is what we corrected with the pivot. We said, "A lot of consumption, love, value has been recognized over here with developers. Let's not try to ask ops for a check. Let's instead talk to development managers. Let's completely redo our go-to-market.”

Instead of going top-down and enterprise, kicking down the door and talking to the C-suite, we decided to flip it 180 degrees and go bottom-up, developer-led, and consumption-based.

Only once developers were seeing value at scale would we go talk to their manager and ask them to swipe their credit card.

It's really about tightly coupling value realization with the same organization that holds the budget to afford that value. 

Ultimately, in 2019, we kept the open source assets, the brand, and all the developer-facing tooling and IP, but we sold off the commercial IP that was around production, operations, and tooling. We completely redid the go-to-market, which became completely bottom-up and dev-led—in fact, we didn't have any sales for the first 18 months of the new company. It was all credit card-based. And we had maybe one or two marketers, who were completely digital marketers.

So Docker has gone PLG. Do you see that potentially changing into the future?

Since then, we have added humans with the word sales on their badge, but even sales is predicated on this principle of, "Is the org consuming and getting value before there's a sales conversation?" 

The reason we started adding salespeople is because as organizations started to use Docker at scale, they started putting lots of seats on their credit cards—50, 100, 500. But once they started saying, "No, no, no. I want to buy 5,000 seats,"—well, that's hard to put on a credit card. We started then to have an inside salesperson who could pick up the phone, issue an invoice, and walk them through a purchasing process. 

How’s that different from before?

First, it’s all inside-first.

Second, it is 100% consumption-based. The product is wired with telemetry so we can see consumption coming out of the different domains. If there is no consumption or the consumption isn’t above a certain threshold, sales isn’t allowed to talk to them. And when it does reach a certain threshold, typically they reach out to us first. Sales can then say, "Hey, I can see that you're using a lot of Docker. Are you able to scale? Are you getting the security you want? Are you getting the support that you need?”

It's a very, very qualified conversation and not, "What is Docker? What are containers? Do you know how to use Docker in your tooling?" 

There's none of that because the customer's already using the product. That's very, very different from where we were prior to November 2019.

Recaps are challenging for the team and company culture. You went from ~420 employees pre-pivot to about 60 with Docker 2.0. Can you talk about how you rebooted the company culture and some of the key components? What did the team composition look like at 400 employees and what does it look like at 60?

It was quite challenging for the team. Up until that day in November 2019, people thought the company was fine, their options were growing in value, and we were on the right path. With something as significant as a restructure, you obviously can't communicate it to the employee base ahead of time in all-hands meetings and emails and such.

Step one was all about rebuilding trust. We were hyper transparent about everything going on. We were sharing everything with the employees in terms of where things stood on cash, revenue, and revenue growth. That was the start, and it didn't happen overnight.

Second was about learning from the challenges of Docker’s 1.0 focus. We had many different customers, many different buyers, many different users in the previous version of Docker. We simplified it by saying, "We're going to serve developers, first and foremost."

Doing that clarified a lot, and when you shrink from over 400 people down to about 60, you need to simplify the business and the focus. 

"Is this going to serve developers?" became the North Star question for every activity we did. That then circled back to the previous thread: "Is this going to serve developers in an authentic, bottom-up, product-led growth motion?"

Not everyone was on board with that. They appreciated the transparency, but they went, "I really want to work on this problem that's not a developer-centric problem," or, "I really want to contribute to this open source community, which is no longer as critical."

But it simplified things a lot, got us aligned, and made it clearer how we made decisions and set priorities. We were just going to be laser-focused on this community in a bottom-up go-to-market motion.

Over time, as that flywheel started spinning and success came in, it gave the internal team confidence. Others from the outside then saw that as well and had the confidence to join us despite the noise and the press around the restructuring in 2019—which, if you go back and Google it, had practically everyone saying we were dead.

Those were the first two big things—rebuild trust with the team, and simplify. 

The third thing was to then iterate quickly, show results, and play those back to build confidence within the team that we're on the right track.

Between 2020 and 2021, Docker went from $6M ARR growing 170% to $50M ARR growing 733% YoY. Can you explain how Docker has defied the ‘laws of gravity’ of growth, whereby growth rates decelerate at scale? Are you seeing any increasing returns to scale in the business?

First, we need to acknowledge the massive tailwinds that are helping create this market. IDC says there's going to be demand for 750 million new applications in the next two years. That's more applications than have been written in the entire 40-year history of IT—so there's huge demand.

That is creating huge demand for developers to write those applications, but also for productivity from existing developers, since the demand for applications is outstripping the supply of developers to build them. Depending on who you listen to, there are about 26 million developers out there today, and that will grow to about 45 million by 2030.

Covid of course just dialed this to 11—now, everyone has got to get on their digital game, and that requires applications and developers.

We created this phenomenal base of consumption with our freemium model, where Docker Desktop can be downloaded for free by anyone around the world. As we grew from the pivot in November 2019, we learned where the value in the product was, who saw value, and in figuring out a monetization model, we were effectively just unlocking existing consumption and rebalancing the trade. 

That rebalancing and those macro forces have been the two big levers that have unlocked the growth we're talking about here.

Expansion revenue has been key to Docker’s resurgence per Tribe’s numbers showing a turnaround in revenue retention by cohort. Can you talk about the operational levers you have to continue to drive revenue expansion?

There's a couple good threads in there. 

One, we were very aware that from a buyer's or user's standpoint, the biggest hurdle is always the one between zero and a cent. Once someone locks in at a cent, then you can negotiate: "Is it a cent? Is it two cents? Is it ten cents?"

Our pricing was very intentional. What we wanted people to think about was how they spend $250K-$300K per year on each of their engineers, so $5-$24 per month to make those engineers productive is just an easy decision.

Once we’ve established that commercial—in addition to community—relationship with them, we’re suddenly talking to the check writers and decision makers who are the managers of all those devs. That gives us fantastic lines of communication to hear about what else they want to address in their development organization and what other objectives they have for their developers.

What that has enabled is our new tiering. The features in our pricing tiers are geared towards those managers and those personas around developers. We're not trying to monetize the individual developer motion—we’re monetizing the managers of thousands of developers who need single sign-on to be able to make sure they're all authenticated and using their internal directory service, or observability to make sure they're downloading all the right content from the container image registries around the internet. Those are features for managers with a budget to pay for such things.

This gets to your second question. Once we have that commercial relationship and buyers who, as I said, are happy to pay for the productivity, they go, "My devs could really use feedback when they're building containers locally to know if they're building with a package that's outdated or not."

We can say, "You know what? We can give you 10 for free. But by the time you do your 11th, that's another charge." 

In this way, additional monetization is going to come not from jacking up the price, but using the feedback we get from managers to add more value that effectively allows us to branch into new adjacent workloads.

There are lots of things happening on that local developer desktop, many of which we're not in today, but we can see the traffic from wherever developers are pulling all their containers. We know there's unit testing, debugging, linking, and collaboration going on. Docker can have a role in adding value to all those activities.

Let’s talk about cloud IDEs and cloud-based development. Using Docker and shifting development to the cloud creates consistency across teams and enables collaboration without a local development environment. Can you talk a bit about what you’re seeing in terms of how cloud IDEs touch on Docker?

I’ll start with the customer problem, which is that these apps are complex. Ninety-nine percent of our customers are deploying the containers their developers produce with Docker to Kubernetes. These are not just one or two containers but tens of containers, 20 or 30 of them. It's become a huge challenge to give developers an experience where they can iterate, write code, and build locally, because replicating that 20- or 30-container stack locally is just too much for the machine.

Even with Apple pouring hundreds of cores into their M1s and M2s—which is another reason why there's still a lot of gravity on local-only dev—what we're actually seeing is many companies saying, "We're going all in on the cloud because the clusters are up in the cloud.”

The leading trend is a hybrid mode where developers are working locally on their container but, through the magic of networking, they're working as a team in a shared dev cluster.

At the center of that cluster is a dev staging version of their Kubernetes application, so the 20 or 30 containers are running in that shared cloud environment. That’s based on customer feedback, explorations, and just listening to people in the community.

We believe that that's going to be the mainstream way that this market transitions. 

There will be local development going on, simultaneously tethered to a shared dev environment in the cloud. That's how you're going to give devs that local feedback loop and local performance, plus the benefit of shared resources in the cloud, which is very difficult today unless you're all up in the cloud.

That’s really hard today, particularly when it comes to customizing your tooling and having that same freedom you have locally, but we're behind this trend, and you’ll see more from us on this in the coming year.
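For illustration only, here is a minimal sketch of that hybrid pattern, assuming a hypothetical shared Kubernetes dev namespace and service: the developer runs the one container they are iterating on locally with Docker, while reaching the rest of the shared stack over a kubectl port-forward tunnel. This is a generic sketch of the idea, not a description of any specific Docker product.

```python
"""Hybrid inner-loop sketch: one service runs locally, the rest of the
20- or 30-container stack stays in a shared Kubernetes dev cluster.
All names here (namespace, service, image) are hypothetical."""
import subprocess
import time

NAMESPACE = "team-dev"              # shared dev namespace (assumption)
SHARED_SERVICE = "svc/orders-api"   # one of the shared backend services (assumption)
LOCAL_IMAGE = "myapp/web:dev"       # the image the developer is iterating on (assumption)

# 1. Tunnel the shared backend service from the dev cluster to localhost:8081.
tunnel = subprocess.Popen(
    ["kubectl", "-n", NAMESPACE, "port-forward", SHARED_SERVICE, "8081:80"]
)
time.sleep(3)  # crude wait for the tunnel to come up

try:
    # 2. Run the locally built container, pointing it at the tunneled service.
    #    host.docker.internal resolves to the host from inside Docker Desktop containers.
    subprocess.run(
        [
            "docker", "run", "--rm", "-p", "8080:8080",
            "-e", "ORDERS_API_URL=http://host.docker.internal:8081",
            LOCAL_IMAGE,
        ],
        check=True,
    )
finally:
    tunnel.terminate()  # tear down the tunnel when the local container exits
```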

Interoperability is key to containers, making open source and its network effects key to Docker’s moat. However, that often means that Docker containers as an open standard end up bundled into proprietary products and ecosystems—for example, GitHub Codespaces. Can you talk about how Docker thinks about its open source business model, specifically in relation to how and where it looks to monetize?

History is always interesting. The Unix market became balkanized almost from day one. There were Solaris, HP-UX, and AIX—but nothing interoperated. There are many different Linux distributions out there. The Red Hat app doesn't run on a SUSE server or on a Canonical or Ubuntu server.

We're very fortunate that this has not yet happened to the container market. You can run a container on Docker Desktop, you can run it on Red Hat OpenShift, AWS ECS, or AWS EKS. You can run it on Azure ACI completely unchanged, and that's really important.

What’s also important is that it then allows this massive market to form, and for us, a fantastic business to be built off monetizing just a slice of that.

A big, growing, holistic market is an opportunity for us. The fact that this market is standardized, and that there are many ways in from a developer's standpoint to give value to that developer, is how we see the opportunity. That's a long way of answering it.

There’s a view out there that WebAssembly and Wasm containers are potential disruptors to traditional Docker containers in the same way that containers were a disruptor to virtual machines. Docker has positioned these as complementary rather than competitive. What’s the misconception people have about these two technologies that is ultimately going to drive success for Docker?

There's two answers to that.

One is that it’s worth being skeptical anytime you hear, "X is going to kill Y." Mainframes are still a ~$20 billion business for IBM. There's still plenty of VMs out there. Java is celebrating its 27th birthday this year, JavaScript as well. Neither is falling in popularity. Overall, history would say that those kinds of assertions are pretty nonsensical.

But to answer in a more strategic way, what we were all trying to do 15 years ago—and what Docker accelerated—is this new way to create value around microservices.

Microservices allow smaller teams to own the value they're creating, ship faster, add value, and deliver value to their customers faster, and Docker rose in tandem with the trend around microservices. Docker Linux containers were the first really obvious and easy way to deliver microservices.

Microservices existed before Docker—Netflix was off doing them—but Docker made it easy and democratized the ability to deliver microservice-based apps. 

In 2016, we brought that to Windows containers, and in the last 18-24 months, Amazon has embraced the Docker container image format for serverless via Amazon Lambda. 25% of all Lambdas are now delivered as Docker containers. 

You can see where this is going. Wasm is now delivered as a Docker container. You can build Wasm modules with our build technology and run them locally. Wasm is another way to create a microservice, much like Linux containers, Windows containers, and Amazon Lambdas.

Docker is such an easy way for developers to reason about and build microservices. It was applied initially to Linux containers, but it's been applied to multiple architectures since. Wasm is just one more architecture.
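As a concrete illustration of "Wasm delivered as a Docker container": Docker Desktop shipped a Wasm technical preview around this time that runs Wasm workloads through a WasmEdge-backed containerd shim. The sketch below uses the flag and runtime names from that preview, which may differ by Docker version, and a made-up image name.

```python
"""Sketch: running a Wasm workload via Docker Desktop's Wasm technical
preview. Flag and runtime names are from the preview era and may vary;
the image name is hypothetical."""
import subprocess

WASM_IMAGE = "example/hello-wasm:latest"  # hypothetical Wasm-packaged image

subprocess.run(
    [
        "docker", "run", "--rm",
        "--runtime=io.containerd.wasmedge.v1",  # WasmEdge containerd shim (preview)
        "--platform=wasi/wasm32",               # Wasm/WASI platform target (preview)
        WASM_IMAGE,
    ],
    check=True,
)
```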

A big focus in Docker’s plans for 2023 is developer productivity and particularly, safety. Can you talk a bit about this idea of safety and why it’s such a big focus for Docker going forward?

Our industry has so often had to trade speed off for safety. There’s this thinking that if you're going to ship fast, you might break something or you might introduce a vulnerability—and conversely, if you're going to be absolutely safe, then it’s going to take you months to get a release out the door.

With Docker, devs won’t have to make those tradeoffs. They’ll be able to ship quickly, safely—and in the coming year, we’ll have additional capabilities in the product to facilitate that.

We've already talked about developer productivity and bringing that productivity to teams. We’ve talked about the hybrid mode of local and shared clusters in the cloud. In parallel, you’ve heard about the shift left—it's a little cliché, because so many vendors say shift left when what they’re really doing is raining data down on the heads of these poor devs, which scares them and drives them right into vendors’ business models. That's not what we're doing.

Because we're on the developer's desktop, we are there at the point of creation and when they're merging that pull request. We're there when they're pulling down that base image from a registry and they're using our build tech to create that image right then and there. We're as far left as it gets.

What that gives us is the ability to index everything the dev is doing before they even touch CI. Today, so much of security only shows up once the dev kicks off Jenkins or GitLab or whatnot.

We're able to—right there in the desktop—show them the impact of the changes they're making and suggest an alternative change if there might be a dangerous impact.

We’re able to tell them that if they merge this pull request with the upgraded version of this package, it will actually remove the vulnerability we just detected in what they just wrote in their text editor right there.

Giving them that feedback loop right then and there beats the hell out of where most security is today, which is that code gets pushed into CI and 20-30 minutes later, the dev gets a notification that there’s a vulnerability and they better go fix it. The dev goes, "What? That was 20 minutes ago. I've already moved on."

That's basically the leading-edge solution today. Organizations have probes in prod that pick up security issues around 30-45 days after the code has been deployed.

By instead keeping that feedback loop right then and there in the moment of creation—and we're pretty uniquely suited to do that—we get the opportunity to help them build safer code. And it’s not just around security—it can also help developers figure out whether they’re following their internal corporate standards, whether they’re using open source licenses that have been approved by their organization, and so on.

We can look at all of that right then and there on the desktop before they merge anything, which gives us a commercial opportunity and an opportunity to help them be more productive right then and there.
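To make that point-of-creation feedback loop concrete, here is a hedged sketch of a pre-merge check a developer could run locally against an image they just built, using the Snyk-backed docker scan plugin from that era (since superseded). The image name and the exit-code convention are assumptions, and this is not a description of the specific capabilities Docker was previewing.

```python
"""Sketch: local, pre-merge vulnerability check so feedback arrives at the
point of creation instead of 20-30 minutes later in CI. The image name and
the non-zero-exit-on-findings assumption are illustrative only."""
import subprocess
import sys

IMAGE = "myapp/web:dev"  # hypothetical image the developer just built locally

# Run Docker's (Snyk-backed, now-superseded) scan plugin against the local image.
result = subprocess.run(["docker", "scan", IMAGE])

if result.returncode != 0:
    # Assumption: a non-zero exit code means findings were reported or the scan failed.
    print(f"Issues reported for {IMAGE}; review before merging the pull request.")
    sys.exit(1)

print(f"No issues reported for {IMAGE}; go ahead and merge.")
```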

In five years, if everything goes right for Docker, what does it become? How will the world change as a result?

There is so much demand for new applications in the market—and alongside that, for developers. We want to serve all 45 million of the developers that are coming into the market between now and the end of the decade.

Right now we're largely serving back-end, server-side, or full-stack engineers. Beyond that, there's the whole market of front-end engineers and the whole AI/ML market. We're finding Docker adds value to the productivity and safety of all these different sub-segments of the developer world.

Five years from now, we want to be a big, independent, developer-focused company helping all those different developer segments ship great apps safely.

Disclaimers

This transcript is for information purposes only and does not constitute advice of any type or trade recommendation and should not form the basis of any investment decision. Sacra accepts no liability for the transcript or for any errors, omissions or inaccuracies in respect of it. The views of the experts expressed in the transcript are those of the experts and they are not endorsed by, nor do they represent the opinion of Sacra. Sacra reserves all copyright, intellectual property rights in the transcript. Any modification, copying, displaying, distributing, transmitting, publishing, licensing, creating derivative works from, or selling any transcript is strictly prohibited.
