MVPs
Professor's Notes

Defining MVPs

Over the last decade, scholars and entrepreneurship practitioners have offered a number of definitions, each with varying degrees of specificity. In The Lean Startup, Ries (2011: 77) defines the MVP as “A version of a new product, which allows a team to collect the maximum amount of validated learning about customers with the least effort.” Blank (2013) also emphasized the theme of minimal effort by noting that MVPs ideally are composed of just those features (and no more) that allow the product to be deployed. York and Danes (2014) built on these definitions by specifying the importance of user involvement in the MVP. They define the MVP as “a set of ‘minimal requirements,’ which meet the needs of the core group of early adopters or users.” Camuffo and colleagues (2020: 566) describe the MVP as “a preliminary basic version of the offering with just enough features to let customers experience it and assess their willingness to pay for it.” Martins Pacheco and colleagues (2021) identify MVPs as products that enable “teams to test their ideas with limited resources early on.” Prominent in these definitions is the emphasis on active learning with potential customers, but notably there is no mention of the need for transparency, an important point to which we will return later.


The definitions presented above are varied, but many share common threads: minimum features, efficiency, external deployment, and the testing of user experiences. We propose a synthesized definition that incorporates each of these aspects. Moreover, this definition is explicit about the central purposes of the MVP. Specifically, we contend that the MVP is designed and deployed with two precise purposes in mind: to learn via experimentation under uncertainty and to demonstrate to potential stakeholders an approximation of what future products could become. These primary purposes, we argue, are what differentiate MVPs from other representations such as technological prototypes or functional product/service releases that have a profit motive as their central purpose. The prior literature has positioned prototypes primarily as internal testing tools, often used by engineering teams for the purposes of technological testing or for internal evaluations, or what Song and Montoya-Weiss (2001: 77) refer to as “in house sample product testing.” Davila (2000: 389) notes prototypes are used “to assure manufacturability” and that there is a “direct relationship between prototypes and investment.” Rothaermel and Deeds (2004) contend that the ideal outcome of the exploration phase is the development of a patentable prototype. By contrast, we position the MVP as deployable with relatively little capital expenditure and as integral to the initial exploration phase. Unlike preceding conceptualizations of prototypes, our definition of the MVP is distinct due to its market-based learning property and its external social construction property (e.g., it is intended to demonstrate to external parties an approximation of what a future version of the product could become, rather than to serve internal testing, patenting, or technological development). Taken together, we define the Minimum Viable Product (MVP) as:


A tangible product or service representation with a limited number of features deployed for the purposes of learning about the value of a potential solution via experimentation and demonstrating to potential stakeholders an approximation of what the product could become.


By tangible we mean that the MVP exists as a concrete instantiation of a concept (cf. Berglund et al., 2020) that can be perceived by at least one of the senses (i.e., the MVP can be seen, heard, or felt, etc.). Merriam-Webster (2021) defines tangible as easily seen, recognized, or capable of being perceived. MVPs have only a limited number of features; by design, they are minimally developed products. This implies that entrepreneurs must therefore make strategic decisions about which features to include in and exclude from their MVPs. MVPs only become useful to a firm when they are deployed externally to test some aspect of the MVP; they cannot be useful if they live solely under lock and key in the mind of the entrepreneur. Consistent with the science of the artificial perspective, MVPs can be classified as artifacts that exist at the interface between the venture and the external environment (Simon, 1996). That is, they are deployed for an explicit purpose at this interface, i.e., for learning via experimentation (cf. McDonald & Eisenhardt, 2020; Zuzul & Tripsas, 2019). By value of a potential solution, we mean that entrepreneurs are concerned with measuring a product’s worth for some set of potential use cases (Rindova & Petkova, 2007), and/or uncovering customers’ willingness to pay for a potential solution (Camuffo et al., 2020). MVPs are future-oriented: they are used to test for potential market opportunities, in accordance with the understanding that there is a high degree of uncertainty about the value of a potential solution a priori (cf. Dimov, 2016; Knight, 1921). Finally, we contend that MVPs are used by entrepreneurs to demonstrate to potential stakeholders an approximation of what the product could become. This is a key distinguishing feature of the MVP relative to previous conceptualizations of prototypes in the literature, as noted above.


The lack of a unifying MVP definition in prior research has resulted in a dearth of research specifically focusing on MVP experimentation. A scholarly understanding of MVPs is critical for future research (Shepherd & Gruber, 2021); indeed, MVP experimentation is an antecedent to many important entrepreneurial decisions, including pivot decisions (e.g., Grimes, 2018), resourcing decisions (e.g., Navis & Glynn, 2011), launching and scaling (e.g., Desantola & Gulati, 2017), and competitive positioning (e.g., Rindova & Courtney, 2020). Thus, a clear definition, as well as an understanding of the variance and boundaries of the construct, is important for future scholarly work on the antecedents of such entrepreneurial decisions. In the following section, we turn to explaining how MVPs vary in the features that represent aspects of their tangible form and fidelity. By form we mean concrete instantiations (e.g., tangible representations or models that exist empirically). By fidelity, we mean the degree to which the MVP is comprehensive relative to its anticipated final form along aesthetic, functional, and symbolic dimensions.

MVP Forms and Manifestations

As Figure 1 suggests, MVPs fit between the testability threshold and the exploitation threshold. MVP forms displayed above the testability threshold are deployable for experimentation. The upper bound of the MVP space is referred to as the exploitation threshold (we describe this threshold at the end of this section).

The most basic form of entrepreneurial artifact that has been described in the previous literature is the thought experiment (Shepherd & Gruber, 2021). A thought experiment is an abstract hypothetical scenario that lies within the mind of its creators (Folger & Turillo, 1999). As it relates to entrepreneurship, thought experiments are at the highest level of abstraction and can be used to imagine how new products could be deployed to meet unmet market needs. Indeed, the ability to imagine new possible scenarios is an important skill which can influence idea quantity and quality (Kier & McMullen, 2018). However, the inner thoughts of entrepreneurs are not spoken; in many cases entrepreneurs may not even be able to articulate such thoughts (Cornelissen & Clarke, 2010). Moreover, the thought experiment alone cannot furnish new empirical observations (Haggqvist, 1998). Given our definition of MVPs as tangible representations deployed for the purpose of learning, thought experiments fall outside the boundaries of the MVP. We acknowledge that a thought experiment could certainly have internal value for an entrepreneur and is representative of an entrepreneurial artifact; however, without tangible representation it is not testable.

Just above the testability threshold lies the napkin sketch. The founders of Southwest Airlines famously developed a “napkin sketch” of their idea for an airline that would connect Dallas, Houston, and San Antonio, Texas. The sketch mapped out the initial business model as a point of internal discussion and fulfilled the goal of communicating the idea. However, napkin sketches are limited in their testability, as they are essentially a basic drawing or illustration of an idea. It is possible that entrepreneurs could still conduct relatively weak testing with a napkin sketch, although such actions could have negative consequences for perceptions of the venture’s legitimacy and reputation (a critical point we will return to later in our proposition 2). Yet the MVP in sketch form is still tangible, as it can be shared and understood by others even though it is not a three-dimensional object.

One level up from the napkin sketch, there are several MVP manifestations that fall into the zone of passive interaction, where user testing is possible but largely a passive experience. The explainer video is an MVP form that is used to visually display a potential solution. Before developing the full product, Dropbox co-founder Drew Houston created an explainer video to convey to potential users how the product would allow for seamless syncing across devices. Dropbox was proposing to build what at the time of launch was a very complex product: one that required multifaceted integration across operating systems, management of large files over slow internet connections, and handling of file conflicts. The explainer video, although non-functional, helped Mr. Houston confirm that people were interested in the product and enabled Dropbox’s waiting list for the beta product to go from 5,000 people to 75,000 people almost overnight. Nonetheless, while viewing the explainer video, potential users could not actually interact with the MVP itself.[1] Other MVP forms in the zone of passive interaction include the wireframe diagram, the live product demo, and the 2D mockup.

At the next level up, we find the zone of dynamic user interaction. A commonly used form here is the landing page. A landing page is a basic webpage that displays a visual representation of a product or service to a potential customer. In some cases, the landing page is used to gather initial customer acquisition estimates or to gather basic contact information from potential customers. A slightly more deceptive form of the landing page presents customers with a “buy now” button. Other manifestations in the zone of dynamic user interaction include the clickable web/mobile app, the 3D physical product, and the crowdfunding campaign.
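To make the landing-page form concrete, below is a minimal sketch in Python using Flask. The route names, page copy, and in-memory sign-up list are hypothetical placeholders for illustration rather than a prescription; the point is simply that the “buy now” click is recorded as a signal of demand rather than processed as a real transaction.

# Minimal landing-page "smoke test" sketch (hypothetical names throughout).
from flask import Flask, request

app = Flask(__name__)
signups = []  # illustrative only; a real test would persist these somewhere

LANDING_PAGE = """
<h1>Product name goes here</h1>
<p>One-sentence value proposition.</p>
<form action="/signup" method="post">
  <input name="email" type="email" placeholder="you@example.com" required>
  <button type="submit">Buy now</button>
</form>
"""

@app.route("/")
def landing():
    # Show the visual representation of the (not yet built) product.
    return LANDING_PAGE

@app.route("/signup", methods=["POST"])
def signup():
    # Record interest instead of charging anyone; the click itself is the data.
    signups.append(request.form["email"])
    return "<p>Thanks! We'll be in touch when the product is ready.</p>"

if __name__ == "__main__":
    app.run(debug=True)

The list of recorded sign-ups (or clicks on the “buy now” button) then serves as the behavioral evidence the entrepreneur is seeking.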


Finally, in the zone of simulated experiences we find several forms. The Wizard of Oz MVP is used to create a simulated customer experience using a combination of technology and manual workarounds. In a recent New York Times article, Eric Ries, the founder of the lean startup movement, discussed an “app where you could take a photo of food and it would tell you how many calories were in it. They said it was driven by proprietary technology. But they were really just using people hired to look at the images” and manually estimate the calories (Kessler, 2021: 2). This hybrid tech-human arrangement exemplifies the Wizard of Oz MVP, but it also reveals that deception is implicit in this type of MVP. Ries responded, “Customers generally don’t especially care how the technology works as long as it accomplishes their goal… this happens very often.” Entrepreneurs who use the Wizard of Oz MVP attempt to put users in a fully immersive front-end experience without vision or knowledge of what exactly is happening “behind the curtain”, hence the name Wizard of Oz. If well designed, users might (falsely) believe that the technology or back-end that supports the product is fully functional. The concierge MVP is very similar to the Wizard of Oz MVP; however, rather than relying on deception, users have a view of what takes place “behind the curtain”, to continue the analogy. With the concierge MVP, the back-end, including the individuals providing manual services, remains visible and transparent to the customer (Bland & Osterwalder, 2019). Entrepreneurs often rely on these two forms of MVPs when the full product requires an extensive technology build-out. As such, a reliance on manual processes that don’t require an initial investment might be a necessary first step to test user experiences. As Hoffman explains, “sometimes in order to scale, you have to first do things that don’t scale.” Other forms in the zone of simulated experiences include “pop up” physical tests and function-heavy inventions/innovations (see Table 1 for more details).
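As an illustration of the hybrid tech-human arrangement described above, the short Python sketch below (with entirely hypothetical names and no real image-recognition model) shows how a user-facing call that appears automated could simply route each request to a queue of hired human reviewers “behind the curtain.”

# Wizard of Oz back-end sketch: the front end looks automated, but requests
# are queued for human reviewers. All names are hypothetical.
from dataclasses import dataclass
from queue import Queue

@dataclass
class EstimateRequest:
    request_id: int
    photo_path: str  # food photo uploaded by the user

human_review_queue: Queue = Queue()

def estimate_calories(request: EstimateRequest) -> str:
    """User-facing call: appears to be driven by 'proprietary technology'."""
    # No model is invoked; the request is handed to a person instead.
    human_review_queue.put(request)
    return "Your calorie estimate is being prepared and will arrive shortly."

def human_reviewer_pass():
    """Back-end step staffed by people hired to look at the images."""
    while not human_review_queue.empty():
        req = human_review_queue.get()
        print(f"Reviewer manually estimates calories for {req.photo_path}")

if __name__ == "__main__":
    print(estimate_calories(EstimateRequest(1, "lunch.jpg")))
    human_reviewer_pass()

In a concierge MVP the same manual step would exist, but the customer would knowingly interact with the human reviewer rather than with a seemingly automated interface.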

The upper bound of the MVP space is referred to as the exploitation threshold. March’s (1991) seminal work distinguished two distinct organizational modes, exploration and exploitation. Exploration involves search, variation, risk taking, experimentation, play, flexibility, discovery, and the pursuit of new knowledge, whereas exploitation focuses on production, efficiency, implementation, execution, refinement, and the use of things already known (Levinthal & March, 1993; March, 1991). Theoretically, once a firm reaches the exploitation threshold, its driving motivation shifts from experimentation to exploitation. At this point, product artifacts deployed by the firm no longer fit the definition of the MVP, even though additional learning is likely to occur, consistent with prior exploitation research that specifies a shift to routinized learning and a tendency to institutionalize reliable behaviors into routines during the exploitation phase (e.g., Laureiro-Martínez, Brusoni, Canessa, & Zollo, 2015).


Of course, in practice it is not uncommon for entrepreneurs to combine several MVP manifestations. For example, sometimes explainer videos are displayed directly on landing pages, crowdfunding campaigns might display 2D mockups, the Wizard of Oz MVP might rely on a clickable “no code” app to facilitate interactions with users, and so on. Nonetheless, our discussion and representation of discrete MVP forms and the boundaries of the MVP space provide a conceptual foundation upon which the rest of our work (and future MVP research) can be built. Such definitional and conceptual clarity is an overdue contribution to the literature (Shepherd & Gruber, 2021). In the next section, we continue our foundational conceptual development of the MVP construct by presenting three important dimensions of MVP fidelity.

MVP Fidelity Dimensions

MVP fidelity refers to the degree to which the MVP is comprehensive relative to a full feature set along a particular dimension. We posit that there are at least three critical dimensions of MVP fidelity: aesthetic fidelity, functional fidelity, and symbolic fidelity. Although variation along these dimensions has seldom been discussed in the literature, as we describe herein, such variation is likely to significantly influence the MVP’s effectiveness for learning during experimentation. We unpack each fidelity dimension below.


Aesthetic fidelity relates to features that appeal to the senses, most commonly through visual representations (“looks like”). This includes aspects such as color, size, and symmetry. MVPs high in aesthetic fidelity express visual representations of what the final product could be. However, in many cases the final product might be quite visually different from the original MVP. Dropbox’s explainer video, which we described earlier, is an example of an MVP that prioritizes aesthetic fidelity over other aspects. The primary aesthetic consideration relates to how the MVP visually appears to users (cf. Creusen & Schoormans, 2005; Rindova & Petkova, 2007).

Functional fidelity relates to features that are pragmatic, enabling users to accomplish a task or goal. They are tangible only in instrumental form (Creusen & Schoormans, 2005; Eisenman, 2013). Functional features can include the particular aspects of a chair that enable sitting, characteristics of a doorknob for opening, or the prongs on a fork for eating (Norman, 1988). One way to think about functional fidelity is to consider the degree to which a feature allows a customer to accomplish a functional goal, such as whether it actually works (“works like”). An example of an MVP that was built around high functional fidelity was the Falcon 1 from SpaceX (see Berger, 2021). When testing this MVP, SpaceX founder Elon Musk was more concerned with discovering whether his small team could design and combine novel components to launch a rocket in a much more cost-effective way than any rocket that came before it. During this phase of MVP testing, SpaceX was not very concerned with aesthetic features (i.e., how the rocket looked to stakeholders).

Symbolic fidelity relates to features that evoke mental representations. This could include cultural or procedural meanings associated with the product beyond its functional and aesthetic features. Such cultural meanings are associated with established categories and institutions that allow people to make sense of new artifacts in relation to social structures (Hargadon & Douglas, 2001). Moreover, symbolic features might evoke meanings associated with the personal identities of potential customers as well as the identity of the venture (Creusen & Schoormans, 2005; Rafaeli & Vilnai-Yavetz, 2004). Symbolic fidelity might also include aspects of design that allow the user to become more familiar with the roles or routines serviced by the product.

Due to their power in eliciting cultural resonance, symbolic features can prompt the user to consider experiences, procedures, norms, or social contexts (i.e., “feels like” or “reminds me of”). For example, to suggest that the Fyre Festival would be an extremely exclusive luxury experience, its creators used an enigmatic orange tile on Instagram which linked to a video of supermodels and other social media influencers reveling in the Exuma Islands (see Appendix B). The Fyre Festival was purportedly to be held on Pablo Escobar’s former private island and was billed as ‘the greatest party of all time’, symbolizing to potential attendees that it was a once-in-a-lifetime opportunity and leading some to pay up to $250,000 to secure tickets sight unseen (Kreps, 2017). In this way, a high level of symbolic fidelity can induce a fear of missing out (see Przybylski, Murayama, Dehaan, & Gladwell, 2013). We graphically illustrate the three dimensions of fidelity in Figure 2.

Notes on Using High- and Low-Fidelity MVPs

To test the assumptions discussed in the previous lessons, you need to choose and use the right type of minimum viable product (MVP): a version of a new product that allows you to collect the maximum amount of validated learning about customers with the least effort. Generally speaking, an MVP includes just a few essential features that allow you to ship a product to early adopters and get feedback from them to check your hypotheses. Getting paid for your MVP or a product pre-order is one of the best indicators for validating your business idea.


Different Criteria for MVP

There are different types of MVPs used for different purposes in different cases. In addition to the forms and fidelity dimensions identified above, MVP testing can differ in the following ways:

Coverage is the number of customers reached. An interview is done with a few people, while an AdWords campaign can reach thousands.

Feedback time defines how much time is needed to prepare for the experiment and get results. Conducting an A/B test of the landing page will give you results much faster than creating two different product prototypes and showing them to potential users (see the sketch after this list).

Reliability of results defines the possible margin of error in your experiments. If you show a hand-drawn mockup of a product, it won’t give the same impression to your potential user as a video demonstrating all its features and benefits, so there might be a significant difference in customer behavior after interacting with different MVPs.
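To illustrate how feedback time and reliability trade off in practice, the sketch below (Python, standard library only) compares two hypothetical landing-page variants with a two-proportion z-test; the conversion counts are made-up numbers used purely for illustration.

# A/B test sketch for two landing-page variants (illustrative numbers only).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

if __name__ == "__main__":
    # Variant A: 48 sign-ups out of 1,000 visitors; Variant B: 74 out of 1,000.
    z, p = two_proportion_z(48, 1000, 74, 1000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference

The smaller and quicker the test, the wider the margin of error; a fast experiment trades some reliability for speed, which is exactly the balance discussed next.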

Choosing between High- and Low-Fidelity MVP

The fidelity of the MVP mostly depends on how much effort is needed to make the MVP look as similar as possible to the final product. It’s probably the most important criterion for startups building their MVP because it influences how fast you can start your experiments and how reliable your results will be (you must find the balance between fast feedback and high reliability).

For instance, you might decide to create a landing page to test whether there would be demand before you have a physical product. Normally, it would be considered a low-fidelity prototype because it’s just a landing page, not the real product. But what if you create a landing page for your experiment that looks like the actual sales page you will have once the real product exists? Even though you don’t have a product yet, a potential customer will see exactly the same website as they would see when you’re actually selling the product.

Categorizing MVP types into low versus high fidelity is always relative to the real product. It’s much more important to understand when it’s more practical to use low- and high-fidelity MVPs, regardless of their label.

Low-fidelity MVPs are typically used to:

• better understand the problem and related issues

• check how important the problem is for customers

• confirm whether the problem is worth solving

• understand what kind of solution would be most welcomed

High-fidelity MVPs help to:

• determine how much customers are willing to pay for the solution

• find early adopters and evangelists

• optimize various aspects of marketing strategy

• identify the most promising viral growth techniques

When choosing the type of MVP to verify a particular hypothesis, you should consider:

• What is the biggest risk you have right now, and how could you check it?

• How much time do you have to build this MVP and get results from it?

• How much money do you have at this stage? What amount would be smart to use? Don’t plan anything fancy for your first tests!

• What makes the most sense in your case? Which of the hypothesis validation strategies would bring your startup to the next level?

Key Takeaway

Different hypotheses require different types of MVPs. Don’t overload your MVP with unnecessary features and details: you are seeking validated learning. Therefore, run planned experiments with MVPs as close to the real product as possible, yet make them easy and inexpensive to build.