Big bang launches are ineffective because you aim to drive users into a monolithic product that hasn’t been built and tested piece by piece. The metrics are invariably weak, and there’s just one big test at the point of launch, so it’s hard to debug the issues, and you’ve already spent a lot of marketing energy pushing users into a broken product.
It’s better to decompose the product vision into core value hypotheses, and build them as user-facing chunks of value that can be tested independently. This also lets you incrementally tune your marketing messaging and acquisition tactics, so that when you do run a major campaign or ‘launch’ you will have solid end-to-end metrics from marketing to product.
Big bang launches
The ‘Big Bang Launch’ is when you invest significant time and energy behind marketing a discrete launch that has not been iteratively developed and tested. For example, a startup builds a new iPhone app, tests only internally, and ships with app store featuring, a press push, and an attempted viral invite campaign before it has done any cohort-by-cohort beta rollout.
Why big bang launches are problematic
There are many typical problems with the big bang launch — such launches are often rushed out and riddled with bugs, have janky UX flows with massive onboarding retention leakage, and fail to hit their marketing objectives because the messaging and core value aren’t coming through and the mechanics of message-to-onboarding are too cumbersome to convert.
I see this issue come up all the time — almost every startup tries to launch things in chunks that are too big. Indeed, this post was inspired by so many recent conversations coaching away from big bang launches, and a recent tweetstorm by awesome veteran startup exec, product, and growth master Andy Johns.
What leads people to these big bang launches
The two most common things that I see driving startups toward the dark side of the force with these big bang launches are: 1) a misunderstanding of how to turn vision into strategy, and 2) a misunderstanding of how to turn strategy into tactics.
- There are two classes of misunderstanding around ‘Lean Startup’ that I commonly see. Some people think you can just ship a random feature and iterate your way into a product without any vision, just by asking users what they want. This doesn’t work; you can’t make something people want by just asking them. Incremental rollout of a non-vision is a non-product. On the other side, some people think that if you do have a big vision, that implies a big bang launch. This approach fails because there’s a long way between vision and pixels. You need a process to ‘dial in your vision’ and iterate on the manifested product as you build it, and this process needs to be tailored to the particular product.
- When folks do roughly understand that they need to both have a vision and ship it incrementally, I still observe significant misunderstanding of how to turn strategy into tactics. The most common problem here is failure to see opportunities for fine-grained decomposition of the product into user-facing chunks of value that can be tested independently along the path to dialing in the vision. So the team ends up missing the plot on incrementalism — instead of a single big bang launch, they break it down into a few smaller big bang launches. It’s a simple ‘product debug’ reality that the more stuff you ship at once, the longer it takes to figure out what’s wrong if you don’t like your metrics, whereas if you can ship isolated tests, you can debug them faster and hopefully get to a working manifestation of the vision faster.
Dialing in the vision
First you need a strong vision — and not just what but why.
Vision fuels strategy, and strategy fuels tactics. ‘Dialing in the vision’ is about getting the strategy and tactics right enough that the vision manifests as product. Product strategy for an early stage startup should be lightweight and user-focused with baked-in test-revise loops.
Let’s talk about the first 6 weeks or ‘bootstrap phase’ of a new product. Planning even 6-10 sprints ahead is hard, as core hypotheses can be invalidated during just the first 1-2 sprints and change how you see the manifestation of the vision. So rather than a traditional heavyweight and static multi-week product roadmap, I encourage a fluid ranking of MVPs against core hypotheses until the core product value has settled — meaning it tests well and feels solid to the product leader.
Prioritize the core value hypotheses first. Build out MVPs against those hypotheses and aggressively get *intentional user feedback* — this means you are designing user interviews based on hypotheses, reviewing analytics, etc. This is not the same as blindly synthesizing inbound from an email or social feedback channel. Dialing in the vision is a top-down, intentional, hypothesis-based process rather than a bottom-up synthesis of inbound opinion.
During the ‘bootstrap phase,’ where our core value hypotheses aren’t validated yet, we acknowledge that we know nothing, we have no users, and our best bet is to build toward a clear vision in incremental chunks so that we get our product working with users piece by piece. The good news is that this approach is way more likely to end up with a product that drives solid metrics once launched publicly.
Always be asking yourself: ‘What user-facing value am I trying to test here?’ Don’t over-design and over-engineer random screens just because they are part of your theoretical future flows. For example, right now I’m working with one startup on a new residential real estate shopping experience where the core hypothesis is that users want to find which neighborhoods to buy in, and we can play a digital financial advisor and recommend which neighborhoods represent the best buys for the user. This company is ultimately a mortgage bank, and so there are tons of things that need to be built out for this business, but the core product value hypotheses can be tested without most of it being built out.
This product really just needs to 1) test that the message and value prop resonate — that users want to shop for houses based on neighborhood affordability; 2) successfully onboard people through a financial advisor flow and capture their monthly income and debts to estimate their affordability range under different financing scenarios; and 3) test that it can successfully recommend relevant neighborhoods based on user shopping criteria. These three core tests drive out the must-have functionality required to test whether or not the entire product will work. Ultimately a mortgage bank wants to make loans, but if the game is to test whether a new kind of shopping product can acquire users upstream from other lenders, and therefore have lower CAC and different growth mechanics, then each of these elements can be tested in isolation, and none requires the technical and operational backend to make loans.
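To make test 2 concrete: the affordability estimate can be a back-of-the-envelope calculation long before any lending backend exists. Here is a minimal sketch, assuming an illustrative debt-to-income cap and the standard fixed-rate amortization formula — the function name, parameters, and defaults are my assumptions for illustration, not this startup’s actual underwriting rules:

```python
def max_affordable_price(monthly_income, monthly_debts,
                         annual_rate=0.065, years=30,
                         dti_cap=0.36, down_payment_pct=0.20):
    """Rough purchase-price ceiling from income and debts.

    All defaults (rate, term, DTI cap, down payment) are
    illustrative assumptions, not real underwriting criteria.
    """
    # Largest monthly payment the debt-to-income cap allows
    max_payment = monthly_income * dti_cap - monthly_debts
    if max_payment <= 0:
        return 0.0
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    # Invert the amortization formula: loan principal that
    # max_payment can service over n payments at rate r
    max_loan = max_payment * (1 - (1 + r) ** -n) / r
    # Gross up by the assumed down payment to get a price ceiling
    return max_loan / (1 - down_payment_pct)
```

A calculation like this is enough to power the ‘financial advisor’ onboarding flow and test whether users engage with affordability-based shopping, with zero loan-origination infrastructure behind it.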
So if you’re building something ‘because you need it’ — be careful, and ask yourself, ‘What user-facing value am I trying to test here?’ If the answer is ‘Well, I’m not testing right now, but we need this later,’ then you have failed the early stage startup prioritization test, and this work should probably be cut from scope until the core hypotheses about user value appear likely to be true.
Dialing in the product vision affords parallel tracking and milestone-based triggers
Clever decomposition of work and parallel tracking of it is one of the keys to early stage startup success — great startups seem to go so much faster than everyone else because they are doing exactly the right work at the right time to drive traction as fast as possible. Many small things are happening in parallel in a clever way to make it appear that the startup is ‘further along than it actually is’ — in other words, it has only the right stuff it needs to grow as fast as possible and nothing more. As an outsider, you might see where a high-growth startup is and assume it has many things in place that it does not, because it has smartly chosen only those things which will prove out the user value and then scale acquisition.
In the house shopping example above, dialing in the vision can be done in parallel with any lending operational work, or the lending operations folks can trigger their roadmap milestone starts based on milestone achievement on the product side, which would ensure there’s never too much speculative build relative to product uncertainty and traction.
As another tactical note, the marketing messaging and acquisition models can and should be tested in parallel with product testing and be used for cohort selection and acquisition during beta. The same hypothesis-based approach should be used to test messaging and acquisition UX mechanics: does the message resonate, does onboarding convert well, how many of our product’s core-value target actions do we drive, what is weekly retention for test cohorts older than a month, and so on?
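The weekly-retention question above needs nothing more than signup dates and activity events to answer. A minimal sketch in Python — the data shapes and function name here are illustrative assumptions, not a specific analytics tool’s API:

```python
from collections import defaultdict
from datetime import date

def weekly_retention(signups, activity):
    """Weekly retention per signup cohort.

    signups:  {user_id: signup_date}
    activity: iterable of (user_id, activity_date) events
    Returns {(cohort_week, weeks_since_signup): retained_fraction},
    where cohort_week is an (ISO year, ISO week) pair.
    """
    # Assign each user to the ISO week they signed up in
    cohort_of = {u: tuple(d.isocalendar()[:2]) for u, d in signups.items()}
    cohort_sizes = defaultdict(int)
    for c in cohort_of.values():
        cohort_sizes[c] += 1
    # Collect the distinct users seen in each (cohort, week offset) cell
    active = defaultdict(set)
    for u, d in activity:
        if u not in signups:
            continue  # ignore events from unknown users
        weeks = (d - signups[u]).days // 7
        active[(cohort_of[u], weeks)].add(u)
    return {
        (c, w): len(users) / cohort_sizes[c]
        for (c, w), users in active.items()
    }
```

For a cohort older than a month, the cells with week offsets of 4 and beyond are the retention numbers worth watching during beta.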
The Big Bang Launch conflates the destination (the marketing launch) with the journey (the product build). Vision vs. incrementalism is a false dichotomy — you can and must do both. Vision does not mean big bang, and incrementalism does not mean asking the users to tell you what to build.
Decompose your product vision into core value hypotheses that limit what you need to build in order to deliver maximum immediate user-facing value the fastest. Parallel track your product build with other aspects of marketing and ops, triggering milestones in other functions only on achievement of certain milestones on the product side — since that is where the maximum uncertainty lies, and your job is to minimize uncertainty about your core value hypotheses being true as quickly as you can.