We’ve all sat through that three-hour meeting reviewing strategic options the team has been discussing for months. Different slides, same questions, some new data, same concerns. Same conclusion: “Let’s gather more data before we decide.”
Here’s what’s actually happening: there is an insatiable thirst for data, but no one can articulate the actual question they’re trying to answer. More analysis feels productive. It feels safe. But nobody’s getting closer to a decision.
The truth is, most large companies are stuck in a pattern where analyzing and testing are treated like opponents instead of partners. We overanalyze. We under-test. And the gap between the two is costing speed, money, and competitive ground.
Here are my thoughts on why this happens and what you can actually do about it.
Before You Test Anything: Get Clear on Your Learning Agenda
Most people aren’t great at writing learning agendas. We’re trained from a young age to generate answers, rather than formulate strong questions.
Before you think about testing, you need to understand what you want to learn. Not what you want to prove. What you want to learn.
Leaders want to test complex ideas all at once. They say things like “We want to know if customers will buy our new platform.” That’s not a learning agenda. That’s a wish.
A good learning agenda breaks the complex question into small, testable components. Take your broad idea and identify the individual capabilities that make it work. Then test those capabilities one at a time.
Let’s say you’re considering a new purchase experience for your product, or maybe you’re adding a new capability to an existing product, or perhaps you’re thinking about entering a new market. Don’t start by asking “Will customers pay $50K for this?” That question comes far too late. Instead, break it down. What are the core capabilities? What’s the value proposition of each component?
Now you can ask specific questions. “Do users find the new checkout process easier to use than their current experience?” That’s testable. That gets you to a definitive yes or no. That tests the value proposition of one specific component.
Start small. Start cheap. You could create a paper prototype. You could mock up a single screen and walk users through it. You could build a clickable demo that requires zero technical infrastructure. The goal is to validate the capability before you invest in building it for real.
What does good look like? Specific, answerable questions with clear success criteria written down before you start. You should be able to say: “If we see X result, we’ll do Y. If we see Z result, we’ll do something else entirely.”
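If it helps to see what “written down before you start” means, here’s a minimal sketch in code. Everything in it is hypothetical: the checkout example, the metric name, and the thresholds. The point is that the decision rule exists before the test runs, not after.

```python
# A minimal sketch of a pre-registered decision rule.
# The hypothesis, metric, and thresholds are hypothetical --
# what matters is committing to the rule BEFORE the test runs.

LEARNING_AGENDA = {
    "hypothesis": "Users complete the new checkout faster than the current one",
    "metric": "median_checkout_seconds",
    "proceed_if_below": 90,   # build the full flow
    "kill_if_above": 150,     # drop this design
    # anything in between means the test was framed badly: redesign it
}

def decide(observed: float) -> str:
    """Map an observed result to the action we committed to up front."""
    if observed < LEARNING_AGENDA["proceed_if_below"]:
        return "proceed: invest in building the capability"
    if observed > LEARNING_AGENDA["kill_if_above"]:
        return "stop: this version of the idea failed"
    return "redesign: the test cannot give a clean yes or no"

print(decide(observed=84))  # -> "proceed: invest in building the capability"
```

Notice the middle band. Forcing yourself to name the zone where the test can’t give a clean answer is exactly the mixed-result check that comes next.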
Here’s a critical check: ask yourself what a mixed result would look like. How would that mixed result happen? Could you sharpen the question’s framing to prevent it? Are there other factors that could produce one? If you can’t answer these questions clearly, your learning agenda isn’t tight enough yet.
Another thing to watch out for: avoid multi-variable tests. If you’re testing a new feature that also has new pricing and a new interface, you’ll struggle to attribute results. Did users love it because of the feature or despite the pricing? You won’t know. Test one variable at a time whenever possible.
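Isolating one variable is also what makes the math work once you reach a quantitative stage. Here’s a minimal sketch, a standard two-proportion z-test with hypothetical numbers, that only reads cleanly because exactly one thing changed between the two groups:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical results: control vs. the ONE thing you changed.
lift, p = two_proportion_ztest(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"lift: {lift:+.1%}, p-value: {p:.3f}")
```

The statistics only mean something because a single variable moved. Change the feature, the pricing, and the interface at once and the same numbers tell you nothing you can attribute.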
Bad learning agendas lead to mixed results. The worst outcome isn’t a failed test. It’s an inconclusive one that leaves you right back where you started, except now you’ve burned time and grown organizational skepticism around testing.
Why Testing Feels So Expensive (And Why That’s a Problem)
Many large companies don’t have the skillset to test ideas quickly, so testing feels more expensive than analyzing.
Analyzing taps your strategy team. They’re good at it. The cost is predictable, maybe even hidden. Testing usually requires your delivery teams, and those people are already working on projects with expected value.
That tradeoff makes executives pause. So they default to more analysis because at least analysis doesn’t disrupt the roadmap or require explaining to the board why you pulled resources from committed deliverables.
Look at what’s happening with GenAI right now. Some companies are still building business cases and analyzing use cases they identified eighteen months ago. Others have already tested, learned from, and iterated on their third version of internal tools. Guess which ones are actually getting value?
The companies that learned to test aren’t smarter. They just built a different capability.
Myths That Keep You Stuck
Based on my experience, I want to dispel a series of myths that hold corporations back from testing ideas. Some of these you’ve probably heard. Some you might believe. All of them are costing you time and competitive ground.
Myth 1: You need a polished product before testing
No. You really don’t.
What you need is clarity on where the uncertainty lives. Spend time analyzing the idea and breaking it into components. What do you already know? What do you think you know but haven’t validated? Where are the assumptions that could sink the whole thing?
Test the uncertainty, not the whole idea. If you’re not sure customers will value a faster version of your service, test speed separately. You don’t need the full service built out. You need to validate the specific hypothesis that’s creating risk.
Myth 2: You need hundreds of customers to validate anything
It really depends on what phase you’re in.
Early tests should be simple and scrappy. Get five people to give you real feedback on the core of the idea. Not a survey. Actual conversation. Actual observation of whether they’d use it and how. Then scale the number of users as you gain confidence.
But let’s be clear about the other side too. Don’t base a multimillion dollar investment decision on feedback from your buddy in accounting and two customers who love everything you do. That’s not validation. That’s confirmation bias with extra steps.
Be sparing with resources. Only invest to the next level when you’ve earned the confidence to do so. Start small. Learn. Scale thoughtfully.
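“Scale thoughtfully” is also cheap to sanity-check. A rough sketch using the standard normal-approximation formula for sample size (the baseline rate and lift are hypothetical) shows why the later, quantitative stage is so much more expensive than five conversations:

```python
from statistics import NormalDist

def sample_size_per_arm(p_base: float, p_target: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per arm to detect p_base -> p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    n = (z_alpha + z_power) ** 2 * variance / (p_target - p_base) ** 2
    return int(n) + 1

# Hypothetical: detecting a lift from a 12% to a 15% conversion rate.
print(sample_size_per_arm(0.12, 0.15))  # roughly 2,000 users per arm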
Myth 3: Everything should be experimented on
Please don’t do this.
There’s a time and a place for experimentation. This is another tool in your corporate toolbox, not a replacement for every other decision-making process you have.
Some things you actually do know enough about. Some decisions are reversible and low cost, so just make them and adjust if needed. Some situations genuinely require deep analysis because the stakes are too high for learning by doing.
The goal isn’t to test everything. The goal is to test the right things at the right time using the right level of investment.
Myth 4: Failed tests mean failed ideas
This one kills good ideas every single day.
A failed test means you learned something without betting the entire farm. That’s actually the point. You wanted to know if something would work before committing serious resources. Now you know.
Sometimes a failed test tells you the idea itself is bad. More often, it tells you that this version of the idea, with this target customer, at this price point, delivered this way, didn’t work. Those are very different insights.
Companies that get good at experimentation start viewing “failed” tests as cheap insurance. They’d rather spend $50K learning an approach won’t work than $5M building something nobody wants.
Myth 5: We can’t test in a regulated environment
I hear this one constantly. Compliance is real. Regulatory constraints are real. But this myth confuses testing concepts with launching unregulated products.
You’re not asking customers to use an unlicensed financial product. You’re testing whether a particular feature set would be valuable enough to pursue through the proper channels. You’re validating demand, user experience, and strategic fit before you invest in building something that has to clear twelve layers of legal review.
Some of the most innovative companies I’ve worked with operate in heavily regulated industries. They’ve just gotten better at figuring out what they can test within bounds.
Myth 6: Experiments should be run by the innovation team
Your innovation team might be great at running experiments. But if they own all the concept testing, you’re not building organizational capability. You’re just outsourcing uncertainty to a team that everyone else sees as disconnected from “real” work.
Innovation teams should teach the capability, not own all the testing. Their job is to help business units get comfortable with experimentation, document what works, and transfer the skills. Otherwise you end up with a small group of people who know how to learn fast and everyone else still stuck in analysis mode.
How to Actually Build This Muscle
Alright. Let’s get practical. You’re convinced experimentation matters, but you don’t know where to start. Here’s what actually works.
Start with something low risk that you want to move quickly on. You should not be learning to experiment on mission critical elements. That’s like learning to drive in city traffic during rush hour. Pick something that matters but won’t sink the quarter if it goes sideways.
Use this as an opportunity to pull your innovation team closer to the business. Have them partner with a business unit on a real problem. Not a made up innovation theater problem. A real one that the business leader actually cares about. This builds credibility for the innovation team and gives the business unit new tools.
Document everything as you go. When you run your first in-market experiment, have someone take detailed notes about the process. What worked. What felt clunky. Where you got stuck. This becomes your training material. You’re creating a “train the trainer” experience for the next team that wants to try this.
Create clear guidance for employees about when to use this capability. Not everything should be tested. Help people understand what makes a good use case. Is there genuine uncertainty? Is the cost of being wrong high enough to matter? Can you design a test that produces an actionable answer? If yes to all three, maybe experiment. If no, maybe just decide.
Be realistic about where your organization sits on the experimentation maturity curve. If you’re just starting, don’t pick a test that requires coordination across seven business units and three technology platforms. Pick projects that match your current capabilities, then build from there.
Partner with consultancies like IDEO that are masters at this. Bring them in not to do the work for you, but to teach your teams how to think differently. Good consultants can help executives get more comfortable with the discomfort of not knowing.
Don’t spend a lot of money on this. The irony of building an experimentation capability by spending millions on infrastructure and tools is apparently lost on a lot of organizations. This capability is designed to save you money by helping you learn before you build. If it’s expensive to set up, you’re doing it wrong.
Get better at learning agendas. Most experiments fail here. Teams don’t fully know what they’re testing. They have a vague sense of wanting to “see if customers like it” but haven’t broken that down into specific, measurable hypotheses. Spend real time on this part. It pays off.
Design experiments to get indisputable yes or no answers. This matters more than anything else. The worst thing that can happen as a result of an experiment is that the answer about whether to move forward is unclear. Mixed results mean the experiment wasn’t designed appropriately. You didn’t isolate the right variable. You didn’t define success clearly enough. You didn’t test with the right audience. Go back and fix the design; don’t just shrug and move on.
The Real Prize: Learning Velocity
The competitive advantage isn’t just speed. It’s learning velocity. How fast can you figure out what works and what doesn’t? How efficiently can you kill bad ideas before they consume resources? How quickly can you double down on good ones?
Companies that build strong experimentation muscles make better strategic decisions faster. They waste less money on initiatives that were never going to work. They have more confidence in their bets because they’ve already tested the riskiest assumptions.
And maybe most importantly, they create a culture where it’s okay to not know everything upfront. Where “let’s test that” becomes a normal response instead of a scary one.
