The Golden AI Glacier: Rethinking Rogers’ Bell Curve for Healthcare

Eric Luellen
Towards Data Science
13 min read · Jan 9, 2019


“One reason why there is so much interest in the diffusion of innovations is because getting a new idea adopted, even when it has obvious advantages, is often very difficult,” said Everett Rogers, widely regarded as the pioneer of the topic, in the introduction to the third edition of his seminal work, Diffusion of Innovations, published in 1983 (Rogers, 1983). As Dr. Rogers noted, this idea was not original to him; it has been part of the human condition for centuries. No less an observer than Niccolò Machiavelli noted 470 years earlier, in the 1513 letters that would later become his classic, The Prince:

“[t]here is nothing more difficult to plan, more doubtful of success, nor more dangerous to manage than the creation of a new order of things…Whenever his enemies have occasion to attack the innovator they do so with the passion of partisans, while the others defend him sluggishly, so that the innovator and his party alike are vulnerable” (Machiavelli, 1532).

Thereafter, the adoption or diffusion of innovations was notably lamented by the British Navy in 1747, American inventor and founding father Ben Franklin in 1781, French judge and lay scientist Gabriel Tarde in 1903, American anthropologists Edward Gifford and Alfred Kroeber in 1937, researchers Bryce Ryan and Neal Gross in 1943, and at least 1,953 authors published in peer-reviewed journals in the 21-year span between 1941 and 1962 (Rogers, 1983) (see Figure 1).

Dr. Rogers defined “diffusion” as:

“…the process by which an innovation is communicated through certain channels over time among the members of a social system; [i]t is a special type of communication, in that the messages are concerned with new ideas” (Rogers, 1983).

This newness inherently involves uncertainty. In this context, uncertainty involves perceptions about alternatives to new ideas, and comparative probabilities about the efficacy of those alternatives, including the status quo (Rogers, 1983). In modern times, many innovations are technologies, which Rogers goes on to define as: “a design for instrumental action that reduces the uncertainty in the cause-effect relationship involved in achieving a desired outcome” (Rogers, 1983). Therefore, technological innovations create uncertainty in the perceptions of prospective adopters about their efficacy relative to alternatives and, at the same time, represent an opportunity to reduce uncertainty through faster and more accurate cause-effect associations (Rogers, 1983). One could then reasonably argue that diffusion of technology is about the second derivative of uncertainty: the uncertainty in the perception of prospective users as to whether the technology will reduce uncertainty.

The modern theory of diffusion of innovations was originally based on the adoption of new farming and home-economics methods in the 1950s, the foundation upon which Rogers generalized the theory and, beginning in the 1960s, applied it to technologies involving hardware and software (Beal, 1957). The pace of innovation diffusion or technology adoption is determined by the process through which these uncertainties about uncertainty, perceived in the minds of adopters, are magnified or reduced by the approaches, cultures, and nature of the adopters and their topical focus areas. These factors, manifested in the policies by which organizations are managed, determine whether organizations, ranging from militaries to manufacturers to healthcare systems, deliver new capabilities, and when.

Rogers’ Bell Curve

Rogers hypothesized that, under the diffusion of innovations theory, technology is adopted at a pace that can be graphed as a normalized Gaussian distribution, or “bell curve,” on standard Cartesian x-y axes. Therein, Rogers showed that adopters divided into five segments depending upon where they fell in this adoption chronology. The earliest adopters were “innovators,” representing 2.5% of a market. Second came the “early adopters,” representing 13.5% of a market. “Early majority” adopters came third, representing 34% of a market. “Late majority” adopters represented another 34% of a market, coming fourth in sequence. And “laggards” represented the final 16% of the market (Rogers, 2003) (see Figure 2). Moreover, Rogers hypothesized that each category of adopters went through four cognitive stages: (1) awareness; (2) decision to adopt or reject; (3) initial use; and (4) continued use. The five factors that most influenced adopters in their decisions were: (i) relative advantage; (ii) compatibility; (iii) complexity; (iv) trialability; and (v) observability (LaMorte, 2018).
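These segment percentages are not arbitrary: they fall out of slicing a standard normal distribution at the mean and at one and two standard deviations on either side. Here is a minimal Python sketch (the variable names and rounding commentary are mine) that recovers them from the normal CDF:

```python
# Rogers' adopter categories as slices of a standard normal distribution.
# Boundaries sit at the mean and at +/- 1 and 2 standard deviations;
# Rogers rounded the resulting shares to 2.5 / 13.5 / 34 / 34 / 16 percent.
from scipy.stats import norm

boundaries = {
    "innovators":     (float("-inf"), -2.0),  # beyond mean - 2 sigma
    "early adopters": (-2.0, -1.0),           # between -2 and -1 sigma
    "early majority": (-1.0, 0.0),            # between -1 sigma and the mean
    "late majority":  (0.0, 1.0),             # between the mean and +1 sigma
    "laggards":       (1.0, float("inf")),    # beyond mean + 1 sigma
}

for category, (lo, hi) in boundaries.items():
    share = norm.cdf(hi) - norm.cdf(lo)  # area under the curve in this slice
    print(f"{category:>14}: {share:6.2%}")
```

Running this prints 2.28%, 13.59%, 34.13%, 34.13%, and 15.87%, which Rogers rounded to the familiar figures above.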

While this level of understanding of the steps in technology adoption and their causes has been successful in many disciplines, the model also contains elements that become shortcomings in healthcare and public health (LaMorte, 2018). Specifically, because the model originated outside those fields, it: (a) fails to include the participatory approach typically required in healthcare to secure buy-in from the “Six P’s”: patients, providers, payers, pharmaceutical manufacturers, purveyors, and policy makers; (b) applies more to the adoption of behaviors than to the cessation of behaviors, a major issue because most modern healthcare innovations replace an existing technology; and (c) fails to consider organizations’ or adopters’ resources and social and peer support for adopting the new technology (LaMorte, 2018).

Evolving Rogers’ Bell Curve

While all academic and conceptual theories are endlessly tweaked and adjusted by new hypotheses and findings, between 1962 and 2015 there were five major evolutions to the bell curve that Rogers propagated as a model for innovation diffusion and technology adoption. The first major evolution relevant here was the technology S-curve, initiated by Richard Foster in 1986 and applied more generally by Clayton Christensen in 1997 in his seminal book The Innovator’s Dilemma (Foster, 1986) (Christensen, 1997). Foster deduced that technological innovation could be graphed with cost and/or time on the x-axis and technological performance on the y-axis, wherein the curve for a new technology always takes some form of an “S”: the induction period (“research and development”) forms the base, payback or return on investment via adoption forms the vertical, and market saturation and obsolescence form the top (Foster, 1986). Second, Christensen, among other things, noted that these S-curves attach in a series of waves (see Figure 3), wherein the key determinants of success were: (a) entering the curve early enough not to be out-innovated by more prescient competitors; and (b) the ability to continually innovate without disruption so as to keep these “S” waves going over the long term (Christensen, 1997). Third, Christensen went on to identify the two key determinants of whether, and how quickly, a technological innovation was adopted or rejected: the adopters’ relative needs and resources. If the status quo met prospective adopters’ needs within their available resources, they stuck with the status quo and delayed or rejected the innovation. Similarly, if a technological innovation was beyond adopters’ resources, regardless of perceived need in some cases, they also stuck with the status quo and rejected or delayed the innovation. These causes are key to healthcare’s adoption of AI and similarly new technologies (Christensen, 2015).
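For readers who want the shape in hand, a logistic function is a common stylized stand-in for Foster’s S-curve. The sketch below is an illustration under assumed parameters (L, k, and x0 are mine, not Foster’s), mapping cumulative effort (cost or time) to performance:

```python
# A stylized logistic S-curve: slow start (R&D), steep middle (payback),
# flat top (market saturation and obsolescence). Parameter values are
# illustrative assumptions, not empirical figures.
import math

def s_curve(effort: float, L: float = 100.0, k: float = 1.0, x0: float = 5.0) -> float:
    """Performance as a logistic function of cumulative effort.

    L  -- the performance ceiling the technology saturates toward
    k  -- how steeply performance rises in the payback phase
    x0 -- the effort level at the curve's inflection point
    """
    return L / (1.0 + math.exp(-k * (effort - x0)))

for effort in range(0, 11, 2):
    print(f"effort={effort:2d} -> performance={s_curve(effort):6.1f}")
```

Christensen’s “waves” are then simply a succession of such curves, with each new technology starting its base while the incumbent flattens at the top.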

The next major evolution of the diffusion-of-innovation-based technology adoption lifecycle, which occurred third chronologically but is presented fourth here for cohesion and clarity, concerns gaps or chasms. In 1991, Geoffrey Moore observed in Crossing the Chasm that a significant cohort of technological innovations went through the induction (research and development) phase and were welcomed and used by early adopters but, for a plethora of reasons, were never adopted more broadly by the market (see Figure 4) (Moore, 1991).

Oversimplified, Moore argues that there is a chasm between early and majority adopters because they have materially different psychographic profiles governing how and why they make decisions. Innovators and early adopters have a pro-adoption bias because of their built-in appreciation for new capabilities; they are predisposed to liking, wanting, and adopting them. By contrast, the 68% of the market that constitutes its early and late majorities focuses more on practicality, the kind Christensen wrote about in terms of needs and resources. This market majority is also skeptical, often from experience, knowing that the vast majority of new technological innovations never go far or do not last (Moore, 1991). The late majority adopters, an equal portion to the early majority, differ again in that they lack confidence in their ability to implement organizational change (Moore, 1991). To overcome these skeptics and demonstrate that a technology is here to stay requires, according to Moore, massive amounts of education, marketing, and relationship building, which in turn requires staying power, which in turn requires capital: more capital than most firms have or can raise, creating a “valley of death” for technology innovation startups (see Figure 5) (Moore, 1991).

Fifth and finally, from 1998 to 2008, Carl May and colleagues proposed Normalization Process Theory (NPT) to evolve prior models and help explain innovation diffusion and technology adoption lifecycles in healthcare (May, 2009). NPT is concerned with three core problems of technology adoption in healthcare settings: (1) implementation, the social process of bringing new actions into practice; (2) embedding, the incorporation of these new practices into habits and routines; and (3) integration, the process by which new practices are reproduced and sustained organization-wide (May, 2009). NPT postulates that: (A) practices are embedded and become routines through the collective effort of individuals working together to enact them; (B) enactment “is promoted or inhibited through the operation of generative mechanisms (coherence, cognitive participation, collective action, reflexive monitoring) through which human agency is expressed;” and (C) reproducing practices organization-wide requires continuous championing and investment by ensembles of change agents (May, 2009).

The AI Adoption Glacier in Healthcare

Beyond the buzz that artificial intelligence will disrupt healthcare, transforming it from reactive to predictive and proactive, with personalized medicine extending our lives by decades, lies the reality, if one talks to experienced and well-publicized digital health entrepreneurs, that wide-scale or timely adoption of the tools to fulfill this promise remains largely hyperbole. Despite an estimated $12 billion of private investment in digital health companies in 2017, many of them related to AI, few if any have had the type of blockbuster success that would justify a private-equity investment (Yock, 2018).

The explanation for why artificial intelligence adoption has been glacial in healthcare, despite the extraordinarily better outcomes it promises in an area critical to humanity, appears to be five-fold. First, health technologists argue that most digital health and AI startups have followed the wrong model, one that was successful with consumers and products in other industries but that ignores the fundamental differences of healthcare (Yock, 2018). The tested and proven technology startup strategy in other industries focuses on quickly getting a minimum viable product to market, then iterating new versions and releases based on the features and functions that prove successful with early end-users (Yock, 2018). This strategy ostensibly ignores the complexity of stakeholders, the risk aversion, and the regulatory climate of the healthcare industry (Yock, 2018).

Second, the “valley of death,” as described by Moore, is longer and deeper in healthcare because of its longer adoption cycles. Startups must survive longer and conduct more marketing and prospect education, which requires more capital, to overcome the extra hurdles described by Yock. Moreover, the technologists in data science, AI, and the other cutting-edge fields in which startups form are in high demand in other industries, such as financial technology and consumer products. It is therefore more expensive to retain this highly sought-after talent for the years it takes healthcare to adopt new technologies.

Third, we must revisit the diffusion-of-innovation theory, because technologists focusing on healthcare appear to have become overly reliant on its reductionist evolutions and to have overlooked its original caveats. First, we can look to the elements of innovation as defined by Rogers: (1) relative advantage; (2) compatibility; (3) complexity; and (4) trialability (Rogers, 1983). In each of these areas, AI in healthcare is problematic. AI is often incompatible with existing systems, policies, and processes, such that they would need to be replaced. Moreover, AI is infamously complex and beyond the knowledge, and sometimes the understanding, of many users; they are loath to accept what they cannot trust, and they cannot trust what they cannot understand. Further, AI is troublesome to trial on many healthcare questions because those trials touch directly on the well-being of humans, which is high-risk and raises numerous ethical issues.

Second, as Rogers points out, the change agents who champion innovations and those whose social buy-in must be achieved are often heterophilous: each group clusters internally by similarity, yet the groups differ markedly from one another. The change agents are often more technologically advanced than the users, creating a bias against effectively understanding each other in communication.

Third, and perhaps most pragmatic and impactful for AI startups in healthcare, Rogers noted the importance of scientific validation of innovations (Rogers, 1983). In healthcare, this means clinical trials; however, there are few, if any, widely accepted standards for software clinical trials such as exist for drug trials. Moreover, and to the point, most trials are extremely expensive, and the academic medical institutions that could perform them (e.g., Massachusetts General Hospital) view them as a way to leverage their “seal of approval” to secure additional revenue, all of which adds to the enormous depth and width of Moore’s “valley of death.” In short, there is a dearth of funding available to pay for software trials of AI applications in healthcare. As a result, the vast majority of innovations are never scientifically validated, and many of those that make it to early adoption are proven scientifically flawed, such that skepticism and the chasm before majority adoption widen, and with them the “valley of death” in the technology adoption lifecycle.

Fourth, recall Christensen’s waves of S-curves (Figure 3). Even if and when an AI startup in healthcare overcomes these significant obstacles, that is only one adoption cycle, the first S-curve in what must be a wave of S-curves if the company is to sustain itself. One result is that a significant portion of the companies that make it through this competitive gauntlet of AI adoption in healthcare must do so repeatedly with new innovations, or risk becoming a much shorter-lived one-product company (a.k.a. “the one-trick pony”).

Finally, we are drawn to time scales: whether AI in healthcare is truly disruptive, and what disruption really means. Christensen showed that the lifecycle of technology maturity in a market is most often 15–20 years (Brown, 2006). Such a period is not truly disruptive; it is transformative. If we examine other transformative technologies, such as electronic mail and the Internet, we anecdotally validate Christensen, because decades passed between their invention and their widespread usage. For AI startups in healthcare, already facing extra-long and extra-deep chasms in the technology adoption lifecycle and the competitive need to continually innovate, this duration of transformation greatly magnifies the “valley of death” for each innovation.

A key to resolving this plethora of challenges faced by AI startups in healthcare may rest in Christensen’s corrective definition of disruption. The term, Christensen argues, is misleading when applied to new technological capabilities in products or services (Christensen, 2015). Instead, disruption is a process: one wherein disrupters begin with a small-scale experiment at the low end or edges of a market (the “fringe”) and focus on how needs are changing and evolving over a long period of time to form new business models (Christensen, 2015). The disrupters find a new model for meeting nascent and evolving customer needs, and the complete substitution or replacement of one technology by another still often takes decades (Christensen, 2015). However, the startup is then relieved of many of the competitive pressures of continuous innovation, because it is not viewed as central or as a threat by competitors, has lower costs that enable it to survive extra-large “valleys of death,” and, with early customers providing feedback, can address many of the structural idiosyncrasies of technology adoption.

References

Beal, G.M., Rogers, E.M., Bohlen, J.M. (1957). Validity of the concept of stages in the adoption process. Rural Sociology, 22(2):166–168.

Brown, D. (2006). Target selection and pharma industry productivity: what can we learn from technology S-curve theory? Current Opinion in Drug Discovery & Development, 9(4):414–418.

Christensen, C. (1997). The innovator’s dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.

Christensen, C., Raynor, M., McDonald, R. (2015, December). What is disruptive innovation? Harvard Business Review. https://hbr.org/2015/12/what-is-disruptive-innovation

Foster, R. (1986). Innovation: The attacker’s advantage. New York: Summit Books.

LaMorte, W. (2018, August 29). Behavioral change models: Diffusion of innovation theory. Retrieved from Boston University School of Public Health: http://sphweb.bumc.bu.edu/otlt/MPH-Modules/SB/BehavioralChangeTheories/BehavioralChangeTheories4.html

Machiavelli, N. (1532). De Principatibus (Of Principalities; a.k.a. The Prince). Italy: Antonio Blado d’Asola.

May, C., Mair, F., Finch, T., MacFarlane, A., Dowrick, C., Treweek, S., Rapley, T., et al. (2009). Development of a theory of implementation and integration: Normalization process theory. Implementation Science, 4:29.

Moore, G. (1991). Crossing the Chasm. New York: HarperCollins.

Rogers, E. (1983). Diffusion of innovations (3rd ed.). New York: The Free Press.

Rogers, E. (2003). Diffusion of Innovations (5th Ed.). New York: Simon and Schuster.

Yock, P. (2018, October 17). Why do digital health startups keep failing? Fast Company. https://www.fastcompany.com/90251795/why-do-digital-health-startups-keep-failing
