How To Deal With Ambiguity In Product Management

Welcome to the first edition of our product-focused content series, zeroing in on the work of our Product team. In this series, the Product team will explore how they operate and share insights into product development, launch and improvement processes.



As a Product Manager (PM), I oversee the two major products within the Fidel API suite: the Select Transactions API and the Transaction Stream API. This includes the initial scoping, development, launch and management of those products over their lifespan. As such, a big part of my role involves untangling ambiguity. Now, let’s explore what exactly that means.

What does ambiguity mean for a Product Manager?

Ambiguity can mean many things. Good and bad.

Positive ambiguity means having the space and autonomy to develop unique solutions for the product I manage. For example, finding a new use case or application that expands the product’s market reach. It’s something that makes the PM job one of the most exciting professions out there.

On the other hand, products are complex and can change a great deal from market research to ideation and, finally, execution. That means PMs often battle with a lack of clarity, misaligned expectations, and confusion.

As a PM, you’re often encouraged to “embrace ambiguity”. You may hear any of the following phrases:

  • “In this industry, you have to be really comfortable with ambiguity.”
  • “We prefer to work with more freedom and, therefore, with more ambiguity.”
  • “We don’t want to hand information to you on a silver platter - go out and find it.”

However, I believe ambiguity should not be “embraced.” As a PM, your objective should be to eliminate it entirely. When ambiguity creeps in, your alarms should go off and you should have a framework ready to deal with it.

How does ambiguity develop within a company?

From my experience, ambiguity often nests in a specific place within a company. A PM needs to think like an exterminator, figuring out where it could manifest before going in. We do this by adopting two points of view that scope the problem and then zero in on a hypothesis for removing it.

  1. Start by developing an external point of view that focuses on the key stakeholders, customers and market. This is where ambiguity lives and breathes. For example, predicting and timing the market acceptance of a new feature or product in line with stakeholder expectations can be like a jigsaw puzzle where all pieces look the same.

  2. Then develop an internal point of view that looks at the product itself and the engineers who are building it. In order to do the best job possible, engineers need precise and non-contradictory information that will allow for product delivery with the least amount of context switching, idle time and sunk development cost.
[Diagram: Information flow]

Advice for New PMs

If you’ve recently joined a new company as a PM, ambiguity is likely to be high during that transition. The first, and most obvious, solution for this is data collation, but it’s important to know what types of data can be used, when and how.

Earlier in my career, I was told that the more ambiguity I encounter, the more information I need to gather. But most of the time, the amount of information available on an ambiguous topic is quite limited. So what can we do about it?

First of all, we need to distinguish between types of data:

  • Secondary Research: Data that’s generated by any source other than the product development team, such as market research. Although it makes sense to use this data to inform company strategies, it is often not suitable for product building.
  • Primary Research: Data that’s generated by the product development team through experimentation and builds on top of second-hand data.

In the product management community, it’s said that market failure is caused by relying too heavily on one form of research. In order to build an effective quantitative hypothesis, it’s important to apply a balanced measure of both.

Hypothesis

Your hypothesis is your first tool to deploy against ambiguity, so it has to be expressed numerically for precision.

There are many templates out there on how to construct a numerical hypothesis. The following components are essential:

  • Condition (feature at a certain price point)
  • Addressable market size
  • Behaviour change (purchase growth)

Start by collecting second-hand data to get a sense of the general market and the addressable market size. Then focus on your own baselines. At Fidel API, I always insist that my development team knows our key performance metrics, split by product, so we can develop realistic expectations for our own experiments.
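
To make this concrete, here is a minimal sketch in Python of what a numerical hypothesis might look like. The product, price point and figures are entirely hypothetical and are not Fidel API data; the point is simply that every component is a number you can later test.

```python
from dataclasses import dataclass


@dataclass
class NumericalHypothesis:
    """A hypothesis expressed with numbers rather than anecdotes.

    All values used below are placeholders for illustration only.
    """
    condition: str                    # the feature and price point being tested
    addressable_accounts: int         # estimated addressable market size
    baseline_txns_per_account: float  # current monthly transactions per account (your baseline)
    expected_uplift: float            # expected behaviour change, e.g. 0.10 = +10%

    def projected_extra_transactions(self) -> float:
        """Monthly transactions the change is expected to add across the addressable market."""
        return (self.addressable_accounts
                * self.baseline_txns_per_account
                * self.expected_uplift)


# Hypothetical example: a loyalty feature offered at a $0.02 per-transaction fee
hypothesis = NumericalHypothesis(
    condition="loyalty feature at $0.02 per transaction",
    addressable_accounts=5_000,
    baseline_txns_per_account=120.0,
    expected_uplift=0.10,
)
print(f"Projected extra transactions/month: {hypothesis.projected_extra_transactions():,.0f}")
```

Writing the hypothesis this way forces the condition, the addressable market size and the expected behaviour change to be stated up front, so the eventual experiment has something precise to confirm or refute.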

A fuzzy or anecdotal hypothesis will not drive progress; it sets you back. Engineers need to know explicitly why they’re being asked to code up a solution. Without that direction, you can’t guarantee buy-in from the development team, which leads to almost-guaranteed market failure.

Once the hypothesis is done and the experiment is out there in the wild, how do you measure success? Usually, by measuring the resulting behavior change as accurately as you can, and you won’t be able to do that without knowing your baseline. For example, a 5% uptick in transaction volume is not enough to pay for an experiment, let alone the cost of a fully-fledged product release.
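
As an illustration, here is a minimal sketch, with entirely hypothetical revenue and cost figures, of checking an observed change against its baseline and against what the experiment and a full release would cost:

```python
def uplift_vs_baseline(baseline_volume: float, observed_volume: float) -> float:
    """Relative behaviour change, measured against the pre-experiment baseline."""
    return (observed_volume - baseline_volume) / baseline_volume


def covers_cost(extra_volume: float, revenue_per_txn: float, cost: float) -> bool:
    """Does the incremental volume generate enough revenue to pay for the work?"""
    return extra_volume * revenue_per_txn >= cost


# Hypothetical numbers: a 5% uptick on a 600,000-transaction monthly baseline
baseline, observed = 600_000, 630_000
extra = observed - baseline  # 30,000 extra transactions

print(f"Uplift vs. baseline: {uplift_vs_baseline(baseline, observed):.1%}")
print("Pays for the experiment:", covers_cost(extra, revenue_per_txn=0.02, cost=1_500))
print("Pays for a full product release:", covers_cost(extra, revenue_per_txn=0.02, cost=40_000))
```

With those made-up numbers, the 5% uplift clears neither bar, which is exactly the kind of conclusion you can only reach if you know your baseline.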

Experimentation

When discussing how to build an experiment with my engineering team, it can feel a bit like a reverse auction where the lowest bidder wins. It takes time to reach this mindset. What has helped is understanding that the purpose of an experiment is to generate first-hand data, not a complete product, as shown below:

[Diagram: Hypothesis-to-experiment pipeline]

Sometimes, engineers ask how I can justify building solutions ‘quick and dirty’ instead of sustainably. Bearing in mind that technical debt is a serious problem for every company, they have every right to ask these questions.

The reasoning is simple. If we can reliably predict the outcomes of a new feature then we build it sustainably and avoid lots of early fixes. If we can’t predict the outcomes of a new feature, we build it as fast and cheaply as possible, in order to reduce the risk of sunk development cost.

This is why we run experiments when conducting product discovery. It will inevitably take some number of failed experiments to arrive at the successful one. I prefer arriving there quickly, as do engineers.

[Diagram: Quality vs. speed]


Conclusion

  • Never embrace ambiguity. Eliminate it as much as you can so your engineers can do their job.
  • When it comes to hypotheses, be as detail-oriented as possible and express your hypothesis with numbers. In order to do that you have to know your baselines, so study your own database (learn SQL, if that’s the gateway).
  • When it comes to experimentation, involve your team as early as possible and let them have a stake in contributing to the strategy. Do not adopt an assembly-line attitude where you dump insights on the engineers and tell them to build a solution. Involve, teach, and learn from other departments in your company. At Fidel API, I am lucky to work with brilliant colleagues across different teams - from Strategy to Customer Success - who are able to provide a point of view that is completely out of a PM's reach.
  • … oh, and try to have fun. Otherwise, what's the point?

Author's note: I would love to get feedback on this article from the wider PM community on LinkedIn. Please feel free to reach out to me with your input at: https://www.linkedin.com/in/lukaskoren/