The customer count in the title is a conservative estimate based on my last six years working on growth at three software companies. 94% of the 1.6 million customers aren't just "product users" but actual paying customers. Big thanks to Professor Sean Johnson for the invitation!
Hoe ben je begonnen met werken aan groei voor startups?
I started as a management consultant. I loved the challenging work and the smart people, but I didn't like the politics and hierarchies. I also didn't like that consultants don't control whether their work gets put into the world. Often you make recommendations that are never implemented. But putting ideas into the world is how you learn, and I craved that.
I had a hypothesis that tech startups offered what I loved about management consulting while fixing its shortcomings.
In terms of how I landed a gig, I sent one thoughtful email to a founder, early employee, or investor every day for three months. I only reached out to people who genuinely excited me — that’s key. I also didn’t ask them for a job. I just asked for an opportunity to learn. In hindsight, that was a breath of fresh air for many since they’re always being asked for jobs, money, favors, etc. Dozens were happy to meet.
Along the way, the network I was building recommended following experts in the space. I started reading blogs like Andrew Chen’s, and I was blown away at the possibilities. The space was starting to evolve, and I was all-in on becoming a part of it.
What are the skills or competencies that you think are most important in someone working on growth?
1) Commitment to lifelong learning. A lot of the best test ideas come from customers. The next best ones usually come from observations and inspiration not at work. People on a quest to read, explore, and acquire knowledge outside of the office always benefit the team’s execution inside the office.
2) Humility. Right when we think we’ve got the world figured out, a test surprises us, and users do the complete opposite of what we think. You need to always balance the pride and confidence in all the progress that you’ve made with the reality that you’ve got so far to go.
3) An appreciation for process. On the outside, our team looks boring. We meet at the same time weekly. Our agenda never changes. We prioritize, test, measure and reflect on experiments. And repeat. The predictability isn’t for everyone.
4) Grit and perseverance. We're like "weather-men" and "weather-women": we're paid to be wrong a lot, even as we continue to practice our craft and commit to process. If you don't have the stomach to appreciate the struggles and the journey, it's not for you.
5) Someone who values an "idea meritocracy." Ray Dalio coined this term at Bridgewater, and it's applicable to any team hell-bent on performing, not just growth teams. Best idea wins, period. We don't care whose experiment it was. We just care about putting the best one forward. The ship sinks or sails, and we're all on it.
What’s the ideal growth team constellation?
Early on, the teams were just me (an analytical person with some DB skills to find product bugs) and a full-stack developer. Over time, they got more sophisticated. Below is a strong team make-up, close to what I have today:
1) Growth team lead: ensures morale is strong, the team stays disciplined, and we stick to process.
2) Technical marketer: doubles as a growth product manager, measures experiments, and communicates outcomes and learnings to the team (to help us get smarter).
3) Full-stack developer: can generally set up 60–70% of tests without help.
4) Full-stack designer: can polish experiments or take the lead if more front-end focused.
5) QA engineer: we don’t have one, but I’ve seen what it can do. Many of the best wins are just fixing broken stuff, and people obsessed with QA can be a game changer.
This all said, our team is still resource-constrained, even having raised tens of millions. We're constantly having to give up developer time to other projects. But operating with constraints is way easier than operating without them.
From a process perspective, how should growth teams organize?
The first startup I worked at was small enough that we got away with sloppy organization. Early on, the process was just "hit the database, find something broken, show and convince a developer," and hopefully that was enough to get them to resolve the issue. Bringing a solution to the table also goes a long way in getting things done faster.
I also did more "product growth" vs. "marketing growth" because we were capital-constrained; we'd only raised a couple million. So that meant focusing on "product." I later learned this product-first mentality was hugely important regardless of how much capital you had. At the time, though, the constraint just forced our hand in a positive way.
Brian Balfour’s Growth Machine article is 80% of our current process. If you follow it rigorously, you’ll be ahead of 95% of growth teams.
What are some top growth team lessons?
1) If you can't communicate and convince an engineer that a task is worth their time, you're not on a path to creating an ideal growth team culture. While you might get a few tests built and deployed quickly, you'll move much slower long-term. Most developers are told what to do too often by people who aren't as smart as they are. You've got to respect how critical they are to your craft and ensure they know they're an equal partner, not just a means to ship the team's ideas.
2) It's not just about finding people who have the skills. They have to appreciate the uniqueness of the work. Much of the best growth work revolves around a) setting up process to always be close to and communicate with the customer and b) fixing the biggest things that are broken. As a team, we rarely build features that are shiny and new; we iterate on what already exists, because our core product already exists. This iterative mindset vs. the "making something new" mindset isn't for every developer, designer, or marketer. The work usually isn't sexy. But some love it and get excited about moving key metrics. You've got to find those whose world view aligns with the team mentality you're striving to build.
3) Growth teams need buy-in from the top. If it’s not there, don’t bother. Much of your work won’t help grow the business. CEOs, founders, etc. need to be on board that a) it takes time to learn and b) while success rates will improve if you’re following process, you’ll still be wrong more than right. Outside of patience from the top, you also need committed resources because without it, you can’t succeed at the next lesson below.
4) Regularly starting and finishing tests is almost everything, especially early on. The formula for growth equals "number of tests run" * "average test impact" * "test success rate." Early on, "number of tests run" is the only input within your control. If you regularly start and finish tests, you develop better prioritization instincts, and the other inputs improve as a result.
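The formula above can be sketched in a few lines (the function name and the example numbers are mine, not the author's):

```python
def expected_wins(tests_run: int, success_rate: float, avg_impact: float) -> float:
    """Expected metric lift per period from the growth formula:
    growth = number of tests run * average test impact * test success rate."""
    return tests_run * success_rate * avg_impact

# Early on, test volume is the only input you directly control:
# doubling the number of tests run doubles expected wins even
# before the success rate or average impact improve.
baseline = expected_wins(tests_run=4, success_rate=0.25, avg_impact=0.02)  # 0.02
doubled = expected_wins(tests_run=8, success_rate=0.25, avg_impact=0.02)   # 0.04
```

The point of writing it out is that the three inputs multiply: any one of them at zero means zero growth, and early teams can only reliably move the first one.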
Any big wins you’re proud of?
To the point of "some of the best wins are just fixing broken stuff": four weeks into my first "growth job," I realized we weren't telling users during sign-up that they had to activate and confirm their account via email. We found out by getting outside the office and testing the flow with half a dozen coffee shop patrons. We fixed it, and activation rates skyrocketed. It was a "million-dollar annual revenue" win that took less than a morning between spotting the problem and deploying a fix.
A more complex win my team (along with our Data Analytics team) is making progress on today is using churn scores and several other variables to dictate how we issue customer discounts and coupons. We've already saved the business hundreds of thousands in annual coupon costs with no statistically significant negative impact on sales and revenue. We think the savings will climb to $1M+ as we continue to hone the model this year.
How have things evolved from your early growth days and today from a growth perspective?
I’ve come to have much more appreciation for well-rooted process. A lot of my time is now spent thinking and executing on how we master the process vs. hit the end goal. Because the best way to hit the goal is to double-down on the process (assuming it’s a good one).
I’ve become much more focused on the inputs (i.e. executing the process) because I know that’s the best way to move the outputs. Every growth team is trying to move an organization’s KPIs, but many are too focused on the end-KPI itself, instead of the inputs that’ll get you there.
What’s a typical day like for you and your team? Typical week?
How do you figure out if you have a product issue or a growth issue?
Great question. While everyone is pursuing Product Market Fit, when you ask most people what that means, they're not sure. That's because too many founders and investors toss the phrase around loosely but can't translate it into an actual equation. Social Capital's Diligence Series solved this problem, and their "growth accounting" and "Quick Ratio" measurements will tell you if the product's ready for growth.
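For reference, a minimal sketch of the Quick Ratio as Social Capital's growth accounting defines it for active users (the example numbers are invented):

```python
def quick_ratio(new: int, resurrected: int, churned: int) -> float:
    """Social Capital-style Quick Ratio for a period:
    (new users + resurrected users) / churned users.
    A ratio above 1 means the product gains more active users
    in the period than it loses."""
    if churned == 0:
        return float("inf")  # nothing churned: unambiguously healthy
    return (new + resurrected) / churned

# e.g. a month with 500 new, 120 resurrected, and 400 churned users
ratio = quick_ratio(new=500, resurrected=120, churned=400)  # 1.55
```

The full Diligence Series breaks the same accounting down by MAU and by revenue; this sketch only shows the user-count version.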
If you were starting from scratch with a new company, didn’t have the benefit of a growth team, etc., how would you organize to get to your first 1,000 users?
Before I went after even 1K users, I’d measure if a startup was ready for user acquisition with any kind of scale. Again, the Social Capital Diligence Series is my favorite method for measuring Product Market Fit. If it looks healthy, I’d then use the Bullseye Framework (Pro Tip: go read the full book Traction) for starting and finishing tests across user acquisition channels to find the ones that lead to good scale and unit economics.
If P/M Fit doesn’t look promising, I’d start talking to users to figure out how to improve. Too many companies think “they just need a great growth marketer" when in reality their product isn’t ready for one.
As the growth team, how do you influence making product improvements?
Outside of the tests we run to hopefully improve the product, one of the best benefits of a growth team is the world view it can instill in the greater organization. That's actually where much of the magic happens. If all of a sudden support teams, sales teams, or non-growth product teams start making decisions by quantifying things like impact, confidence, and level of effort (i.e. ICE), the whole organization improves its productivity.
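A toy sketch of ICE-style prioritization (one common variant scores Impact, Confidence, and Ease each from 1 to 10 and averages them, with Ease being the inverse of level of effort; the backlog items and scores below are invented):

```python
# Hypothetical experiment backlog; each dimension scored 1-10.
ideas = [
    {"name": "fix activation email", "impact": 9, "confidence": 8, "ease": 9},
    {"name": "new referral program", "impact": 8, "confidence": 4, "ease": 3},
    {"name": "checkout copy test", "impact": 5, "confidence": 6, "ease": 9},
]

def ice_score(idea: dict) -> float:
    """Average of Impact, Confidence, and Ease (some teams multiply instead)."""
    return (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

# Highest ICE score gets tested first.
backlog = sorted(ideas, key=ice_score, reverse=True)
```

The exact scoring scheme matters less than applying the same one to every idea, which is what makes the "idea meritocracy" possible.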
What part of the funnel do you think is the most important?
This just depends on whether the product is pre- or post-P/M fit, as well as what resources are available. Speaking generally though, I'd say many companies are too focused on marketing levers (i.e. acquisition) and not focused enough on product levers (i.e. activation, retention, and referral). Acquisition is usually the easiest of all the levers, so it's natural to gravitate toward it and spend a disproportionate amount of time there.
How do you prioritize acquisition work?
The two frameworks mentioned above: 1) the Bullseye Framework for finding channels with good scale and unit economics, and 2) ICE for figuring out which experiments to prioritize in which channels.
How do you balance qualitative vs. quantitative user feedback?
One great way is to quantify qualitative feedback. Too often, what gets fixed first in a product is whatever prompted the worst support call, or the bug the CEO happens to find. That's usually not a great method for prioritization.
Instead, I like Pareto Charts to quantify what customers are saying. Asking customers, "How can we make the product better for you?", theming the responses, and segmenting the data by light/medium/heavy users is a great way to build confidence in the impact of fixing a specific bug or adding a feature. The goal of the chart is to understand what small subset of themes comprises the majority of the problems and opportunity. We don't focus on the things that extend beyond the horizontal 80% line.
Another great tactic to ensure a constant flow of data around support requests and bugs is to have members of the growth team subscribe to support inboxes and regularly add data and themes to their Pareto.
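The 80% cut on a Pareto chart can be computed in a few lines (the themes and counts below are invented for illustration):

```python
from collections import Counter

# Hypothetical themed responses to "How can we make the product better?"
themes = (["slow search"] * 42 + ["confusing onboarding"] * 28 +
          ["missing export"] * 15 + ["mobile bugs"] * 9 + ["pricing page"] * 6)

counts = Counter(themes).most_common()  # themes in descending frequency
total = sum(n for _, n in counts)

# Keep the themes that fall inside the cumulative 80% line;
# everything beyond it is deprioritized.
focus, cumulative = [], 0
for theme, n in counts:
    if cumulative / total >= 0.8:
        break
    focus.append(theme)
    cumulative += n
```

With these numbers, three of the five themes account for the bulk of complaints, which is exactly the kind of concentration the chart is meant to surface.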
There are also new products like Buglife that help make capturing valuable feedback easier. While I haven't used it and can't vouch for the product, I'm definitely going to give it a shot. They're hitting on a key problem.
When do you think it makes sense to transition to a full-fledged growth team?
Post-P/M fit. 20+ employees. My answer isn't too scientific, but basically once the organization gets large enough that everyone's job is no longer just to grow the company. Alex Schultz says it best: in the early days, everyone should be the growth team.
How do you decide what to focus on?
How do you estimate impact?
I haven't found a bullet-proof answer yet. The important thing is a) that you try and b) that you apply the same method to all potential tests. We tend to look in three places:
1) Primary Data: do we have any customer data in support of the idea?
2) Secondary “Close Proximity" Data: even if we don’t have customer data to support the trend, is there strong evidence specific to our space that we can have confidence in?
3) All Other Secondary Data: have we read something that, while not specific to our space, we have strong conviction is applicable?
Getting impact right improves in time (as you capture more primary data). Still, I’ve found that even if you have little-to-no customer data to start, going through the three tiers above is a great way to eliminate bad ideas. So, while you might not find the best idea at first, you’ll eliminate the worst ones. And if you eliminate the worst ones, your probability of success improves because your denominator in the “number of tests we could run" gets smaller.
How do you track performance of your initiatives?
1) The Input Metrics: are we starting and finishing tests on a regular basis (for us, weekly), are we having our weekly meeting every week (creating great habits and continued momentum is key), are we prioritizing using ICE and embracing an "idea meritocracy," and are we capturing, socializing, and reflecting on every test's learnings as we decide what to test next?
2) The Output Metrics: for us, we make sure to have only a handful of metrics we care about to improve focus (i.e. 2–3), and we ensure those metrics are tightly aligned with our company's greater goals and strategic plan. i.e. Are we striving for profitability or topline growth? Are we doubling down on finding more new customers or better retaining our current ones? Of course, we'd like to move all these metrics, but we know that's naïve. For a growth team to be successful, it needs a firm answer from the top of the company on priorities. Everything can't be equally important.
How do you keep the team enthusiastic in the face of adversity — failed tests, etc.?
For starters, I'd again point to hiring people who are good at being wrong a lot, and people who value mastering the process and appreciate the journey over the finish line. Some people are demoralized when they're wrong; others are energized by it, because it brings them closer to the truth.
Beyond personality fit, we set the expectation up front that it's going to be a struggle. But that's good, because most great achievements are the result of struggle and perseverance. We take pride in staying determined and steadfast against hard problems.
At the end of the day, we remember that we're all in this together and that we're all there for each other to lean on and get better.