Smart Entrepreneurs – Dumb Mistakes

[This article was originally posted on LinkedIn November 7, 2014]

My last post considered whether scientists were deluded and answered, ‘no’. In this post, I’m going to describe a situation where a very smart entrepreneur (with deep technical training) actually was deluded, as a result of confirmation bias. I do this because, in my experience, this is a common problem among technical entrepreneurs. To some extent, it’s necessary to believe in your product even if others do not. But, that does not mean that you should blindfold yourself.

As a reminder, confirmation bias is our tendency to focus on data that supports a pre-existing conclusion, and to discount or ignore data that contradicts it. A recent prominent example of this was the rant on CNN by Weather Channel founder John Coleman, where he asserted that scientists find evidence of global warming because it secures them funding. Coleman is focusing on a hypothetical rationale (which is true in some circumstances) for behavior that he disagrees with, rather than on the actual data supporting global warming, because he doesn’t want to admit that it is actually happening.

The situation

In the case at hand, I was invited by a technical entrepreneur to participate in a new technology-based venture. I won’t go into detail on the technology, but in general terms it proposed to use quantitative measurement to augment a more subjective technique (relying on human inspection) to find and characterize flaws in a substance. The goal of the technology was to (a) detect more of the flaws, earlier in the process, than is possible by simple inspection, and (b) obtain more specific information about which flaws actually needed to be remedied (i.e., reduce false positives), avoiding unnecessary and costly work to confirm whether a flaw was serious and to correct it if so.

This is a reasonably common thing to attempt, and when looking at the technology, there are a couple of obvious things to check. First, the technology must act with sufficient precision to differentiate the serious flaws from similar but harmless cosmetic flaws. Second, the populations of serious flaws and cosmetic flaws need to be sufficiently distinct that a measurement of sufficient precision can reliably distinguish between them. For this, there are three (somewhat obvious) cases:

1. The populations are well separated, so even a modestly precise measurement can set a threshold with neither false positives nor false negatives.
2. The populations are close together but still distinct, so a sufficiently precise measurement can still separate them cleanly.
3. The populations genuinely overlap, so no threshold, at any precision, can avoid both false positives and false negatives.
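
To make these three cases concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration (none of it is data from the venture or from any study discussed here); the check simply asks whether any threshold could separate the two populations with neither false positives nor false negatives:

```python
import numpy as np

rng = np.random.default_rng(0)

def clean_threshold_exists(cosmetic, serious):
    """True iff some threshold separates the populations with zero false
    positives and zero false negatives, i.e. every cosmetic measurement
    falls below every serious one (assuming serious flaws score higher)."""
    return cosmetic.max() < serious.min()

# Three illustrative cases; all values are invented for illustration.
cases = {
    "1. well separated":      (rng.normal(1.0, 0.25, 500), rng.normal(5.0, 0.25, 500)),
    "2. close but separable": (rng.normal(2.0, 0.25, 500), rng.normal(4.0, 0.25, 500)),
    "3. overlapping":         (rng.normal(2.5, 0.80, 500), rng.normal(3.0, 0.80, 500)),
}

for name, (cosmetic, serious) in cases.items():
    print(f"{name}: clean separating threshold exists -> "
          f"{clean_threshold_exists(cosmetic, serious)}")
```

In the first two cases a clean threshold exists; in the third it cannot, no matter how precisely the property is measured.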

Naturally, as part of coming up to speed on this technology, these two points were the first things on my mind. Taking part of a day to read some relevant literature, I focused on understanding the population of flaws.

I was helped in this by prior work on similar approaches. The prior work required more instrumentation, and was tied to a different type of non-destructive evaluation, etc., so it was not serious competition. But, it did provide a real-world case study. In a study of 120 observed flaws, 35 were found by full assay to be serious. In that study, quantitative measurements of the target property revealed overlap between the two populations. With an arbitrary mid-point threshold, 4 serious flaws were rated as cosmetic (false negatives) and 3 cosmetic flaws were rated as serious (false positives). This meant that the underlying population of flaws fell into the third of the three cases listed above: there would be no value of the property for which, regardless of the precision used, both false positives and false negatives could be avoided. The assay would be problematic no matter how good the technology.
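
As a quick sanity check on those figures (the 120/35/4/3 counts are from the study as reported above; the rates are simple arithmetic derived from them):

```python
# Counts reported in the prior-work study (see text above).
total    = 120                # flaws observed
serious  = 35                 # confirmed serious by full assay
cosmetic = total - serious    # leaves 85 cosmetic flaws

false_negatives = 4           # serious flaws rated cosmetic at the mid-point threshold
false_positives = 3           # cosmetic flaws rated serious

print(f"false-negative rate: {false_negatives / serious:.1%}")   # ~11.4% of serious flaws missed
print(f"false-positive rate: {false_positives / cosmetic:.1%}")  # ~3.5% of cosmetic flaws flagged
```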

Since the value proposition hinged upon the ability to avoid false positives, yet (as it turned out) false negatives represented unacceptable risks (if actual flaws went undetected, the outcome could be catastrophic), I had to report to the entrepreneur that the prospects for the proposed approach were poor. The only way to reduce false positives would be to allow an unacceptable increase in false negatives. This was not a fault of the technology; the problem itself was such that this type of measurement was not suitable for the constraints (minimize false positives, permit no false negatives).
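
To see why the constraints bite, note that “permit no false negatives” forces the threshold down to (at most) the lowest measurement observed for any serious flaw; with overlapping populations, that lets most cosmetic flaws through. A sketch with invented numbers (again, not the venture’s data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented overlapping populations, as in the third case above:
# serious flaws score only slightly higher on average.
cosmetic = rng.normal(2.5, 0.8, 500)
serious  = rng.normal(3.0, 0.8, 500)

# Permitting zero false negatives forces the threshold down to the
# lowest measurement observed for any serious flaw.
threshold = serious.min()

passed_through = int((cosmetic >= threshold).sum())
print(f"threshold for zero false negatives: {threshold:.2f}")
print(f"cosmetic flaws still flagged: {passed_through} of {cosmetic.size}")
# With this much overlap, nearly every cosmetic flaw clears the bar;
# cutting false positives would mean raising the threshold and
# accepting false negatives.
```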

This is where it got interesting. The entrepreneur was unimpressed and, instead of acknowledging any issue, focused on two subsidiary points: the proposed technique was more sensitive than human inspection (it detected more flaws, of both types), and it tended to produce fewer false negatives than human inspection, the latter simply because the technique detected more flaws overall. Admittedly, this would be good: more flaws detected means more serious flaws found and remedied, avoiding catastrophic outcomes. However, that was only part of the value proposition. The method would still pass through a significant number of false positives. So, at least half of the value proposition was now eliminated.

As a further point, it turned out that another, more comparable (though still distinct) competing technology of similar complexity and cost had been introduced four years earlier. That assay also increased detection of flaws overall (and therefore also reduced false negatives). At the same time, it did not differentiate well between cosmetic and serious flaws among the flaws it detected (i.e., it also passed through false positives). In a good news/bad news scenario, this technology had found only limited traction. To me, what was important was that this showed the value proposition of fewer false negatives, without elimination of false positives, was not compelling to users. The entrepreneur’s new technology would need, but did not have, a substantial advantage over this earlier entry.

Again, the entrepreneur was unmoved, citing increased precision and detail over the prior entry. When I pointed out that superior precision was not relevant, because the populations of cosmetic and serious flaws overlapped, the entrepreneur returned to the point that the system delivered fewer false negatives than human inspection.

At this point I stopped, realizing that the entrepreneur’s reasoning had become circular. The entrepreneur insisted on looking at local advantages: greater detection, and therefore fewer false negatives, versus simple inspection; and greater precision versus the prior instrumental method. These supported the entrepreneur’s thesis. Meanwhile, the facts that the nature of the underlying populations prevented a reduction in false positives, and that an established technology offering fewer false negatives still had little traction (facts that together destroyed the value proposition), were dismissed because they contradicted the current venture thesis. The entrepreneur was a victim of confirmation bias.

Conclusion

This entrepreneur was making a ‘dumb’ mistake that may cost several years of effort, and perhaps significant sums of investors’ money, yet was not stupid. In fact, this entrepreneur is exceptionally smart. Unfortunately, in this case, the main contribution of that intellect was clever ways to rationalize how the data supported a pre-existing conclusion; in other words, confirmation bias.

There are several ways to fight this. The first is to recognize that confirmation bias exists. Seek out critics and listen to them. Remember that you are likely making assumptions; make them explicit and find ways to test them. Failing that, go to the target customers and users, as early and as often in the process as possible. Give them honest examples of each of the possible approaches, and get their feedback. Users will not automatically subscribe to your pre-existing conclusion, and will view the situation more objectively. Or, at least, from their viewpoint, which is what counts.