
Blending Lean, Design Thinking and Co-Development in B2B Product Innovation – A Team…

The mix of methods that allowed us to 'go deep' while offsetting the risk of building a 'one-off' (and what we'd do differently)

Notes from Industry

By Mason Adair, Jobelle Braun, Nick Cohn and Thibaut Dubernet

Photo by Sergio Souza on Pexels

INTRODUCTION

Product research in a B2B context is more difficult than for consumer offerings due to the relatively small number of potential participants, the complexity of their use cases, and the difficulty of accessing them given the high value of their time. As a result, B2B product teams may decide to forgo the type of arm’s-length research at scale that their consumer counterparts employ and instead choose to co-develop their product directly with individual customers.

Co-development (or co-creation) is a valuable B2B innovation technique that can yield a depth of customer-specific understanding along with valuable resources such as proprietary data, competitor intelligence, and IT ecosystem details. However, finding suitable participants and managing a co-development program can be so time-consuming that product teams may find they can only practically co-develop with a single customer. Unfortunately, such intense exposure to a single customer tends to amplify the risk of building for a market of one.

How then can product teams achieve the degree of understanding that comes from highly engaged co-development without sacrificing the market perspective that comes from exposure to many customer voices? This was the challenge the Streets product team at Teralytics faced that led us to the constellation of mixed methods we describe in this article.

Co-Development with a cohort of several customers yields deeper insights than short-term, large sample research methods and broader validation than possible when co-developing with only one customer. Photo by the author.

OBJECTIVES & APPROACH

Teralytics applies machine learning and heuristic models to anonymized mobile data from millions of telecom subscribers to provide our customers with an understanding of human mobility patterns in their regions.

In the summer of 2020, signals from our market suggested that customers in our road planning segment needed to assign vehicle volumes to individual road segments. These signals were the impetus for a new product effort we called ‘Streets’.

Our initial target market consisted of a few hundred transportation planning departments in small-to-medium-sized cities in Germany. These entities follow complex processes and rely on highly accurate (ground-truth-calibrated) data – conditions that benefit from the depth of insight typical of co-development. However, as many of our customers are constrained by public budgets, we also needed an approach that would help us develop a core product that would meet the needs of most customers without requiring expensive customization with each deal.

During our MVP Test, we drew upon the perspectives of dozens of customers while the later two phases of Re-discovery and Co-development involved four customers. Photo by the author.

The process we describe here spanned the entire go-to-market lifecycle from customer and product discovery all the way to the launch and delivery of a functional product to revenue customers. This process consisted of well-established methodologies that we applied across three phases: MVP Test, Re-discovery, and Co-development.

PHASE 1: MVP TEST

To sidestep the difficulty of finding customers willing to volunteer to participate in hours of research, we offered a discount on the eventual Streets product as an incentive for taking part in research sessions over the course of its development. Measuring the appeal of this value exchange of our customers’ money and time in return for a future product delivery was the test we would use to assess the viability of our MVP hypothesis.

"Look at charging a nominal fee to partner customers to participate . . . this changes the participation/motivation dynamic from something that you want from them, to being a privilege to participate in."

  • Board of Innovation

1.1 DEVELOPING OUR MVP HYPOTHESIS TEST

The overarching hypothesis we hoped to test with our MVP was that customers with traffic use cases were willing to pay for road-level insights derived from mobile data. We documented this and other key assumptions in a hypothesis management exercise we internally refer to as ‘mythbusters’, wherein we log key hypotheses and then determine experiment types and success criteria for each.
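The team’s worksheet lived in a shared document, but the same structure can be sketched as a simple record type. This is an illustrative schema, not the team’s actual format; the field names and example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One row of a 'mythbusters'-style hypothesis log (illustrative schema)."""
    statement: str          # the belief to test
    experiment: str         # the experiment type chosen to test it
    success_metric: str     # what result would count as validation
    evidence: list = field(default_factory=list)  # observations gathered so far
    status: str = "open"    # open / validated / debunked

log = [
    Hypothesis(
        statement="Customers with traffic use cases will pay for road-level insights",
        experiment="Pre-sell the MVP ahead of development",
        success_metric="3-5 customers (2-3% of TAM) pre-pay",
    ),
]

# As evidence comes in, the hypothesis is resolved rather than left implicit.
log[0].evidence.append("5 customers signed pre-purchase agreements")
log[0].status = "validated"
```

Keeping hypotheses, experiments, and evidence together like this is what makes it possible to later say which beliefs were validated and which were debunked, as the team does in Phase 2.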

Mythbusters worksheet captures hypotheses, experiment types, success metrics and gathered evidence. Photo by the author.

The test we chose for our product hypothesis was the attempt to sell an MVP ahead of development. We decided that convincing three to five customers (2–3% of our total available market) to pre-pay for the future delivery of our product could be considered a sufficient indication of traction. Since later phases of our process would involve qualitative methods, our choice of this specific threshold was influenced by Jakob Nielsen’s argument that three to five homogeneous participants is the most efficient sample size for qualitative research.

Mason – CPO:

The daunting question of ‘where to even begin’ has stumped my teams in past new product efforts, but the mythbusters process of itemizing our beliefs and then developing experiments to test them was a great way to cut through that initial paralysis and quickly begin validating.

Jobelle – UX Design Manager:

In previous organizations that I’ve joined, product and technical discovery typically happens in silos and ideas are stored in various documents or Confluence pages. The benefit of mythbusters is that it allows for a structured format of generating hypotheses from all voices in the team, building consensus.

1.2 BUILDING THE MVP

With a list of feature hypotheses prioritized by value (by design and product) and technical feasibility and risk (by engineers), we were now in a position to begin scoping our MVP offering, but we first had to decide on the type of MVP.

Types of MVP ranging from no product to minimal product - Streets used 'sell before you build' approach. Photo by the author.

Lean practitioners prescribe a variety of MVP flavors that can range from light expressions of a value proposition all the way to releases of first functionality. Our dual objectives of customer discovery and eventual co-development compelled us to stay as close to a ‘no product’ MVP as possible, while still expressing a concept tangible enough to actually inspire a purchase. Fortunately, solutions in our industry tend to be sold as custom projects ahead of delivery, so a finished product is not typically necessary to be able to start selling.

Our 'sell before you build' MVP consisted of a pitch deck, a clickable demo, a demo script, a product specification and a customer-facing evaluation agreement. Photo by the author.

To enable our sales team to communicate our proposition, we created a pitch deck complete with pricing, a product spec document detailing core features, a clickable demo, as well as a demo script. Our call-to-action was for customers to pay 5% of the standard price up front as an entry fee to join a ‘beta’ program. Customers that remained engaged throughout the development process could then apply that fee to an optional purchase of a discounted annual subscription.

Mason – CPO:

In an ideal world, we would have done extensive discovery research before crafting a solution hypothesis but it’s difficult to access customers for research in our space. As a result, everything we were concepting at this point was a guess. Hypotheses are fine, but Jobelle is so good at her work that even our most dubious ideas looked visually stunning. I leaned a bit too far into creating a compelling proposition to the extent that our concept was ‘too real’, making it harder to convince customers to critique our feature hypotheses later on.

Jobelle – UX Design Manager:

How would one demonstrate product possibilities with an open invitation for co-creation? Indeed, the proof of concept that the sales team showed had a lot of features in a polished prototype, and it may have signalled, "Hey, this is a cool, finished product." Anyone trying to sell product co-development with a proof of concept might want to consider reducing the number of proposed ideas and features, not just the design fidelity.

Thibaut – Senior Transport Modeling Specialist:

From the engineering perspective, building the MVP was all about assessing the feasibility of the product capabilities proposed by design and product. I had just started at the company and teamed up with an engineer who had been at the company for 6 years. Overall, the combination worked well: my fresh eyes allowed me to contribute new ideas, while his experience let him quickly identify potential risks and resources related to our MVP scope.

1.3 CHOOSING PARTICIPANTS

Pitch materials in hand, we now needed a lead list so we could start selling. It was important that our method of selecting prospects would yield a sample that would also be compatible with our qualitative research phases. We ultimately settled on what researchers call ‘Criterion-i Sampling’, a frequently-used non-probability ‘purposive sampling’ method that relies on predetermined criteria to define the sample frame.

Conveniently, this technique looks a lot like the sales practice of establishing an Ideal Customer Profile (ICP). As a result, our sample frame and lead list included companies within our initial target segment – specifically, transportation planning organizations in German cities of between 50,000 and 150,000 inhabitants. The first five companies that signed the pre-purchase agreement were included in the discovery and validation research activities that followed.
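Mechanically, criterion-based sampling amounts to filtering a prospect list against the ICP’s predetermined criteria. A minimal sketch using the population band mentioned above; the prospect records and organization names are invented for illustration:

```python
# Hypothetical prospect records; in practice these would come from a CRM export.
prospects = [
    {"org": "City A planning dept.", "country": "DE", "population": 85_000},
    {"org": "City B planning dept.", "country": "DE", "population": 400_000},
    {"org": "City C planning dept.", "country": "CH", "population": 90_000},
    {"org": "City D planning dept.", "country": "DE", "population": 60_000},
]

def in_sample_frame(p, min_pop=50_000, max_pop=150_000, country="DE"):
    """Criterion-i sampling: keep only prospects matching every predetermined criterion."""
    return p["country"] == country and min_pop <= p["population"] <= max_pop

lead_list = [p for p in prospects if in_sample_frame(p)]
# City A and City D qualify; City B is too large, City C is outside Germany.
```

The point of making the criteria explicit and mechanical is that the resulting sample frame doubles as both a sales lead list and a defensible qualitative research sample.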

Mason – CPO:

In hindsight, selecting customers with which to engage in co-development over several months based only on their fit to the target persona and willingness to purchase might have been too low of a bar. These criteria don’t do much to assess a customer’s cultural compatibility with innovation. Early on, some customers seemed uncomfortable engaging in open-ended discovery, expecting a much more linear product development process.

1.4 MEASURING TRACTION

We used two measurements to evaluate the reception of our MVP: conversions and objections.

The formal conversion step of signing a purchase agreement was a strong validation of our value proposition since it involved a partial financial commitment ahead of product delivery. This exercise had the ancillary benefit of exposing any procurement impediments early on, thereby streamlining the eventual full license purchase. The agreement step was also an opportunity to clarify topics like IP ownership while setting the stage for open-ended innovation by limiting expectations around scope or schedule.

Engaging prospects with the aim of converting them while simultaneously attempting to learn from them was a delicate balance that required a different approach than standard sales of post-product/market fit products. To that end, we encouraged our sales reps not to try to ‘sell through’ objections but rather to listen and explore ‘why’.

"If the meeting is too much about sales, then you’re really narrowing the conversation by asking prospects to react to a solution. The meeting is no longer about finding the best opportunity . . . it’s about convincing the prospect that your broad value proposition makes sense."

Étienne Garbugli, Lean B2B

Along the way, we scheduled weekly debrief sessions with the sales team to discuss their findings, with the intent to iterate or pivot our proposition based on the reaction from the field. To help us identify patterns in our findings, we categorized the objections we received, allowing us to visualize sources of friction in our pitch through charts like the one below.
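The mechanics of this are simple: tag each objection with a category during the debrief, then count frequencies to see where the pitch creates friction. A minimal sketch; the categories and comments here are invented, not the team’s actual objection data:

```python
from collections import Counter

# Hypothetical objection log from pitch debriefs: (category, verbatim comment).
objections = [
    ("pricing", "Entry fee needs budget approval"),
    ("data_trust", "How accurate is mobile data vs. our counts?"),
    ("pricing", "No budget line until next fiscal year"),
    ("scope", "We also need public transport volumes"),
    ("data_trust", "Is the data GDPR-compliant?"),
]

# Tally objections by category to surface the biggest sources of friction.
friction = Counter(category for category, _ in objections)
for category, count in friction.most_common():
    print(f"{category}: {count}")
```

Even a rough tally like this turns anecdotes from individual pitches into a ranked list of proposition weaknesses that can drive an iterate-or-pivot decision.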

Pitch retrospectives helped us quantify the type and frequency of comments and objections that surfaced during initial pitches of our MVP. Photo by the author.

Ultimately, our reps hit our threshold of five customers with intent to purchase, without the need to revise our pitch. However, negotiations with one customer ultimately stalled due to a contract term unrelated to our value proposition; thus we ended up beginning the second phase of our process with four customers.

Mason – CPO:

Getting our reps to approach this as a learning exercise and regularly participate in the debrief sessions was difficult. For the most part I’d say our reps approached this as a typical sale. That’s perhaps understandable because, with the exception of occasional prodding by the CEO, we didn’t establish any special incentives to encourage them to treat this differently – this ‘learning activity’ was ultimately a diversion from their sales targets.

PHASE 2: RE-DISCOVERY

The solid traction we achieved with our MVP was a signal that our market valued these capabilities, but we didn’t yet know ‘why’. We still needed to understand the user goals our proposition supported, how important these use cases were to our users’ work, where this fit into their workflow, and the ecosystem of suppliers and solutions peripheral to our product.

2.1 JOURNEY MAPPING

We set out to answer these questions in our program kickoff meetings. The format of these meetings was a four-hour (virtual) workshop with three to six users and decision makers representing each customer. To fulfill the customer discovery objective of the workshop, we first needed participants to essentially ‘forget’ about the MVP we had pitched them and instead tell us more about themselves and their work.

"The process of creating a journey map forces conversation and an aligned mental model for the whole team . . . This shared vision is a critical goal of journey mapping."

  • Sarah Gibbons, Nielsen Norman Group

At the heart of the kickoff workshop was a journey mapping exercise, a technique that draws heavily on Design Thinking concepts. Here, we asked participants to generate a ‘long list’ of the most important activities and objectives in their role. From this long list, participants chose one objective to deep dive on, walking us through the steps, timeline, people and problems involved in accomplishing that objective.

The Journey Map we captured in Miro during each workshop featured a timeline, sequence of steps, key participants and challenges for one objective selected by the participants. Photo by the author.

Mason – CPO:

The first 10 minutes of these workshops were tense! We had to walk participants back from expectations set during the sales process that the product was almost done. Despite our messages, the high fidelity of our demo gave prospects the impression that the product was more complete than it actually was. Once we reset expectations, the rest of the workshop was great. In a matter of hours, the journey map helped us pinpoint areas in our customers’ most important workflows where Streets could add real value.

Jobelle – UX Design Manager:

The participants were primarily German speakers and I can’t overstate how important it was to have a native-language facilitator who could pick up the nuances in the language and allow for a natural conversation flow. In the first workshop, the status quo journey map was carried out from the POV of the participants’ typical workflow, rather than a recounting of actual events. We changed this in subsequent workshops. The resulting emphasis on what actually happened in specific cases triggered much more meaningful discussions.

Nick – Senior Product Manager:

During the workshop, participants expressed high frustration with their status quo due to lack of data and tools. This clarified their motivation for participating in the program and for investing in the potential Streets product. The workshops seemed to allow customers to reflect on their work in a safe setting, which triggered some dialogue between participants about their true priorities and challenges. These discussions offered us a candid and authentic view into our customers’ problems.

2.2 RESCOPING OUR SOLUTION

After 16 hours of workshops with 13 participants from four customers, we had a deep enough understanding of our users’ personas, objectives, pain points and context to be able to re-define our product hypothesis around this context.

Some of our original hypotheses were validated in the workshop. For example, our belief that small cities lacked the digital tools to respond to and make decisions regarding their citizens’ changing travel behaviors turned out to be true. On the other hand, one important hypothesis that we debunked was the idea that real-time traffic was a requirement for this user group.

After synthesizing our findings, we updated our initial task flow, omitting superfluous steps and focusing on what we now understood to be the core flow. From this core, we derived a new foundational scope which would serve as the basis for our co-development.

Nick – Senior Product Manager:

It was clear that the customers had already been thinking about how the new product could potentially help them prior to the initial workshops, so they came armed with a list of requirements. When we worked with them to define their most important use cases, they all ended up giving the highest priority to traffic volumes, even for public transport-related use cases. Additions to the ‘core’ product definition arrived at key moments when customers responded enthusiastically to design and content features they could see in demos and try hands-on in prototypes. With two key exceptions, those additions were significant refinements of, rather than major shifts from, the initial ‘core’.

Jobelle – UX Design Manager:

After synthesizing the rich information we’d gathered during the workshops we better understood customers’ recurring needs and goals. This gave us the focus we needed for the next iteration. I felt like a sculptor chipping away at a stone. Some of the initial mythbusters hypotheses had been proven, while others proved to be unnecessary for our product. Our team wasn’t defensive about keeping capabilities that didn’t show support in our workshop. That’s really what this process is supposed to do, and it was great to see it work. We used our learning to simplify our task flows and generally re-imagine our offering.

Thibaut – Senior Transport Modeling Specialist:

During this phase, Nick did a great job of not only listening to customers, but also to the team. For instance, learning that customers were not interested in traffic speeds was a welcome discovery because we were not sure we could generate those accurately. On the other hand, one of the features that received the most enthusiastic reception from the customers (that we call ‘spider analysis’) was not explicitly mentioned by the customers, but rather proposed by the team when we learned more about our customers’ objectives. We thus decided to build a first prototype that was presented later in our interactive workshops.

PHASE 3: CO-DEVELOPMENT

Although it might seem like an elaborate term for the obvious notion of collaborating with customers, Customer Co-Development (or co-creation) is a proper theoretical space with ongoing research, a growing body of literature, diverse methodologies, and private sector consultancies.

Related to but distinct from other customer-centric approaches like Design Thinking, co-development tends to feature longer, more intensive interactions with the same subjects, as well as more shared involvement in solution development.

"While design thinking contributes to create value through in-depth user observation, co-creation aims at creating value through user interaction."

— Hemonnet-Goujot, Fabbri and Manceau, 2016

Co-development canon typically presumes joint innovation with a single lead customer, a tenet we departed from by working with four customers simultaneously. Our approach insulated us from the risk of building a one-off, since individual requests that might have sent us down the wrong path could instead be gut-checked in near real time against the needs of our other program participants, constantly keeping us in the ‘Goldilocks’ zone.

3.1 SOLUTION ITERATIONS

Within co-development, it’s equally valid to be ‘customer-led’ as it is to be ‘customer validated’ but suppliers should carefully consider which approach is the best fit. Because of the extensive domain, design, and engineering skills in our Streets product team, we were confident that we were in a good position to lead solution development. By taking the lead we could frame the solution space towards feasibility and financial viability, within which the combination of our in-house design competence and customer involvement ensured high levels of creativity.

Every few weeks we would hold review iterations where we would lead by presenting solution concepts of increasing fidelity, and customers would then respond and ideate on the basis of those concepts. We continued in this way for over two months, designing and developing while validating after each iteration, until we arrived at our first fully functional product.

A screenshot of the intermediate front-end prototype with a small set of realistic data, which generated meaningful responses during usability tests. Photo by the author.

A key milestone in this journey was the delivery of our first ‘clickable’ experience. This allowed us to start testing usability while further validating value and our solution implementation. With this release, customers could actually log in and test our implementation of their use cases. We recorded these 1:1 sessions with each individual and refined our approach based on feedback patterns we observed across users.

Nick – Senior Product Manager:

As we increased the fidelity of our concepts with each iteration, we received much more specific and actionable feedback. For example, one of the use cases we built out was for ‘selected link route analysis and display’, which we demonstrated in the context of an interactive session. While it was a well-known type of analysis in the transport planning field, it was new to the customers, and they were instantly excited about using it to answer many of their use-case-related work questions. In this way, we validated this as an important part of the core product.

Of all the steps in the process, iterating with four customers was the most time consuming. We tested combining multiple customers into a single session but found that it didn’t save time. It was clearly important for us to capture feedback from each individual participant and the customers also had the expectation that their specific feedback would be heard, as opposed to only participating in a group setting.

Jobelle – UX Design Manager:

During one iteration, I emailed Marvel prototypes to customers for feedback, but this asynchronous method didn’t yield much of a response. From then on, I only shared prototypes and asked for feedback during our live sessions. In hindsight, we shouldn’t have expected participants to engage in co-creation outside the established meeting cadence.

When we presented high-fidelity mockups during usability sessions in order to further validate the use cases and tasks, participant feedback focused instead on the ‘fakeness’ of the data and that "it looks nice." This was frustrating, as these sessions didn’t generate quality feedback on the experience. It was only later, once we’d developed a test application with a small set of real data, that participants would really engage with the concepts. In these sessions, they elaborated on what they would do with the data, whether the data seemed plausible, and other potential use cases they could accomplish with this product.

Thibaut – Senior Transport Modeling Specialist:

The format of the workshops allowed us to take a ‘quick and dirty’ approach to prototyping: in some cases, our visualization software was running on the machine of the engineer who built it, and selecting links happened through the browser’s console, so the only person who could actually operate it was the person who built it. Only once we saw the positive reactions did the work of integrating a polished version of the visualization into the actual UI begin.

The constant interaction with the customers was a great source of feedback on the requirements for the computation of the data. However, once we started showing the customers data specific to their area (which was good both for keeping them engaged and for helping us evaluate the ‘face validity’ of our first results), pressure to deliver new data versions emerged, which somewhat interfered with the research aspect of the development. Delivering regular updates to the customers before we had properly automated the various steps of our computation pipeline proved time-consuming, and we had to debug our data sources while the approach was still being refined and tested.

3.2 CALIBRATION AND QUALITY

One of the most valuable benefits of co-developing with customers as partners is the access they are willing to provide to both proprietary data and real-world test conditions, along with their general expertise.

We approached our first calibration step by leveraging our customers’ existing knowledge of the general state of traffic in their region. Here, we provided customers with our overview of the volumes on their networks so they could respond with their high-level ‘impression’ of our product’s accuracy. Then, drilling deeper, we provided some spot location comparisons to traffic counts and, while walking them through use cases, asked them whether the content looked plausible.

Based on our learning, we iterated our methodology several times, reviewing comparisons with traffic counts in increasing detail. Along the way, several customers provided ground truth data sets in addition to what was already publicly available, which we used to tune our models.
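One common way to quantify model fit against traffic counts in transport planning is the GEH statistic, which compares modelled and counted hourly volumes on each link; by convention, GEH below 5 indicates a good match for an individual link. The article doesn’t say which check Teralytics used, so this is a generic sketch with invented link volumes:

```python
import math

def geh(modelled, counted):
    """GEH statistic for one road link (hourly volumes):
    sqrt(2 * (M - C)^2 / (M + C)). Below ~5 is conventionally a good fit."""
    return math.sqrt(2 * (modelled - counted) ** 2 / (modelled + counted))

# Hypothetical (modelled volume, ground-truth count) pairs for three links.
links = [(1050, 1000), (480, 620), (2100, 2050)]

scores = [geh(m, c) for m, c in links]
# Share of links meeting the GEH < 5 convention, a typical calibration summary.
share_good = sum(s < 5 for s in scores) / len(scores)
```

Iterating the methodology, as described above, then amounts to watching metrics like `share_good` improve across model versions as more detailed count comparisons are folded in.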

Nick – Senior Product Manager:

It’s important to note that the term ‘ground truth’ is misleading, as even the customers pointed out that available traffic data is imperfect. One customer mentioned that they hoped our data could actually be considered ground truth and show where their limited count or survey information is incorrect. Despite the imperfect nature of each individual data set, having access to multiple proprietary data sources helped us further validate and tune our models.

Thibaut – Senior Transport Modeling Specialist:

Our grand vision was to be able to provide data for new customers ‘at the click of a button’, in a completely automated fashion without reliance on customer data or the need for manual calibration. As such, we used the ground truth data (both publicly available and provided by the customers) only to check the quality of the results, but not to calibrate parameters. Being able to achieve a reasonable fit to this data in only a few iterations, without any extensive parameter-tuning, gave us confidence in the viability of our approach at scale.

It’s also worth noting that a lot of the ‘available’ data was not readily usable. In particular, traffic counts are often provided as maps that need to be manually converted to a machine-readable format, with further manual mapping needed to associate counting stations to the network definition. Only a small fraction of that data could be used during the co-development phase due to the lack of time to properly format it.

3.3 DELIVERY AND LAUNCH

According to our engagement agreement, the delivery of the first functional version of the product would trigger a purchase decision milestone. At this point, based on what we had jointly defined and developed, customers needed to decide whether to opt in to the discounted product subscription or forgo their pre-payment.

Our agreement also included non-binding language requesting customers who were satisfied with the product to provide an endorsement that we could quote publicly. While this was optional, all the participants in the program offered to do so, and these quotes have been included in our launch collateral.

"Customer participation seems to favor an enhanced perception of the firm, a willingness to recommend the firm to others, as well as a higher purchase intent and willingness to pay."

  • Schreier, Fuchs and Dahl, 2012

Empirical research into co-development suggests that the willingness of our participants to publicly endorse Streets was not unusual. Along with an enhanced perception of the company, collaborating on product innovation tends to prime customers for an eventual purchase. While this is an intuitive concept, it is also good instruction for anyone considering the costs and benefits of this approach.

Nick – Senior Product Manager:

The purchase milestone was tough to navigate because it was technically a sales close that a rep should manage, but it was a judgement call on my part to decide when the product was ‘ready enough’ to ask for a commitment. Up to that point we had been collaborating as peers on a research basis, so it was tricky to pivot back to a commercial conversation. The response varied across the beta customers: some considered they had already ‘bought into’ the product development trajectory and had reserved a budget, while others had changing demands on their time and budget during the course of the development program. In general, the response can be characterized as positive and commercially successful.

Jobelle – UX Design Manager:

What does it mean for design to be ready for launch? In our case, it meant that high-priority usability issues had been logged and addressed, and that the front-end adhered to our Figma designs and our design system. With the front-end team, we worked through JIRA design tickets, with frequent Slack and Zoom calls for clarification. Everyone on the team was then happy with the output. Design is never done, but there’s always the backlog for lower-priority issues.

Thibaut – Senior Transport Modeling Specialist:

A great aspect of the co-development process was that we only needed to deliver the absolute minimal feature set for the customers to be willing to purchase. In particular, discussions with the customers made it clear that they didn’t need frequent data updates to start getting value out of the product, which allowed us to release without building automatic update infrastructure. Releasing without quality reporting, and with known issues in the data production still unresolved, made us nervous, but the shared understanding that this was a first version and that critical updates would quickly follow allowed everyone to rest easier.

SUMMARY

After evaluating the first functional version of our product and reviewing the roadmap we had earmarked for coming versions, all the customers involved in our program opted to subscribe and continue using the product.

Starting from a program built on a high-level value-proposition hypothesis, within five months Teralytics was generating revenue from a product that had been sold multiple times before launch. The momentum the product achieved out of the gate was a direct result of having been developed with otherwise hard-to-access validation data that proved invaluable to product quality, as well as of its pre-launch development, purchase, and endorsement by bellwether customers.

Because we validated our value proposition and co-developed with multiple customers, we are confident this momentum will continue, and that a broader share of the market will find that our solution meets their needs than would otherwise have been the case.

RECOMMENDATIONS

Although our approach to the development of Streets was successful, it was nonetheless imperfect. We’ve learned a great deal from this process, and we think others may be able to benefit from our experience. To that end, we offer the following recommendations to anyone developing a similar B2B proposition:

  1. Charge for participation: Co-development can be a long, often multi-month process. Customers who are financially invested are more likely to stay engaged for the duration.
  2. Develop hypotheses systematically: By cataloguing all your beliefs and assumptions about the product opportunity in one place, you can start to chip away at the high uncertainty that is typical of new product opportunities.
  3. Get product involved in pitches: The best way for product team members to learn from the field is for them to actually join sales meetings.
  4. Align sales incentives: While learning before building is hugely important to a company overall, this importance should be carried down to how sales reps are incentivized to recruit co-development partners.
  5. Test your value prop with low-fidelity concepts: During an MVP test, low-fidelity wireframes will send the right message about product maturity while maintaining a focus on the core proposition rather than on a polished UI.
  6. Find out ‘why’: If your value proposition test is successful, don’t take that fact alone as a green light to build. Hold follow-ups to pinpoint the capabilities users valued and which objectives they serve. Use that context to re-imagine your product scope before building.
  7. Calibrate prototyping for data-centric products: General design research practice tends to focus on the fidelity of the user experience, first validating with wireframes and elaborating up. However, for data-centric products, it’s important to also consider when to increase the fidelity of the data in your prototype. It’s likely you’ll need to validate with higher fidelity data sooner than you’ll need a polished UI.
  8. Co-develop with three-to-five customers: Beyond avoiding the risk of building a one-off, if your product benefits from calibration or training data, access to data from multiple customers is likely to improve the initial quality of your product.
  9. Clarify IP ownership: Preserve the goodwill you’re likely to earn during co-development by getting ahead of any ambiguity about ownership of what is ultimately developed.
  10. Collaborate on launch: Since research shows that co-development partners are naturally inclined to purchase and become champions of your product, make sure you don’t force an end to your collaboration as soon as the engineering is done. If your customers are willing to lend their credibility to your go-to-market efforts, let them!

References

Building the co-creative enterprise. (2010, October 1). Harvard Business Review. https://hbr.org/2010/10/building-the-co-creative-enterprise

Garbugli, E. (2014). Lean B2B: Build products businesses want. Createspace Independent Publishing Platform.

Hemonnet-Goujot, A., Fabbri, J., & Manceau, D. (2013). Design Thinking vs Co-Création : Une étude comparative de deux méthodes d’innovation.

Journey Mapping 101. (n.d.). Nielsen Norman Group. Retrieved September 23, 2021, from https://www.nngroup.com/articles/journey-mapping-101/

Merryweather, E. (2020, June 12). The difference: Prototype vs MVP. Product School. https://productschool.com/blog/product-management-2/difference-prototype-mvp/

Oinonen, M. (2014). Customer Involvement In Co-development in B2B Markets: Literature Review on Key Contributions and Success Factors.

Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health, 42(5), 533–544.

Pinder, M. (2019, January 30). B2B Co-creation: 50 key learnings and insights. Board of Innovation. https://www.boardofinnovation.com/blog/b2b-co-creation-50-key-learnings-and-insights/

Reshetilo, K. (2021, June 15). What type of MVP is right for your startup? Greenice. Retrieved September 23, 2021, from https://greenice.net/type-mvp-right-startup/

Schreier, M., Fuchs, C., & Dahl, D. W. (2012). The innovation effect of user design: Exploring consumers’ innovation perceptions of firms selling products designed by users. Journal of Marketing, 76(5), 18–32.

The challenges of B2B market research and how to overcome them. (2019, June 5). Business Case Studies. https://businesscasestudies.co.uk/the-challenges-of-b2b-market-research-and-how-to-overcome-them/

The framework for ideal customer profile development. (n.d.). Gartner. Retrieved September 23, 2021, from https://www.gartner.com/en/articles/the-framework-for-ideal-customer-profile-development

Why you only need to test with 5 users. (n.d.). Nielsen Norman Group. Retrieved September 23, 2021, from https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
