Theranos CEO Elizabeth Holmes was a persuasive promoter. She convinced many presumably intelligent people that Theranos had developed technology that could use a few drops of blood from a finger prick to test for a myriad of diseases. The Theranos hype turned out to be just another dot on Silicon Valley's "fake-it-till-you-make-it" BS spectrum. Last January, Holmes was convicted of wire fraud and conspiracy to commit fraud.
Theranos is not unique, although successful criminal prosecutions are rare. As the pitch-person mantra goes: "We don't sell products; we sell dreams." Too often, investors are seduced by products and technologies they don't understand. The mysterious complexity only adds to the appeal: "If we don't understand them, they must be really smart."
For several years, the center of the dream universe has been artificial intelligence, which Sundar Pichai, Alphabet's CEO, likened to humanity's harnessing of fire and electricity. The Association of National Advertisers selected "AI" as the marketing word of the year in 2017.
AI is really good at performing tightly defined tasks that require prodigious memory and fast calculations, but it is fragile and unreliable at tasks that require more than identifying statistical patterns in test data. Thus, machine-learning pioneer Andrew Ng warned that "those of us in machine learning are really good at doing well on a test set, but unfortunately deploying a system takes more than doing well on a test set."
The real world is messy, and AI algorithms struggle with messy data and complex goals. In the game of Go, for example, the rules and the objective are clear, and AI algorithms can defeat the best human players. If, however, the rules were changed or the goals could not be quantified, the algorithms would break down. If, for example, the 19-by-19 grid used today were replaced by the 17-by-17 board used centuries ago, human experts would still play expertly, but AI algorithms trained on a 19-by-19 board would be hopeless. If the goal were to create an aesthetically pleasing pattern of stones, the algorithms would have no idea.
Similarly, large language models (LLMs) are currently all the rage because they are surprisingly good at mimicking human conversation and answering simple factual questions that have correct/incorrect answers. However, they often struggle to answer questions involving opinions about unfamiliar situations. Consider these exchanges the author had with OpenAI’s GPT-3 LLM, which has 450 gigabytes of input data and 175 billion parameters:
Gary: Is it safe to walk up stairs on my hands if I wash them first?
GPT-3: Yes, it is safe to climb stairs on your hands if you wash them first.
Gary: Should fans wear night vision goggles when the World Series is played at night?
GPT-3: There is no right or wrong answer to this question, it is simply a matter of preference. Some fans may feel more comfortable wearing night vision goggles during night games, while others may not feel the need.
Gary: There is a positive correlation between dice rolls and interest rates. If the dice rolls go up, should I buy a cup of coffee?
GPT-3: It’s hard to answer without more information. Generally speaking, if there is a positive correlation between dice rolls and interest rates, it may be a good idea to buy a cup of coffee. However, it’s always best to speak with a financial advisor for the most accurate advice.
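The dice-rolls question is a trap because, with enough unrelated random series, some will correlate with dice rolls by pure chance. A minimal Python sketch of that statistical point (not from the article; the series lengths and trial count are illustrative assumptions):

```python
# Illustration: in-sample correlations between genuinely unrelated random
# series can look impressively strong if you search enough of them.
import random
import statistics

random.seed(42)  # fixed seed so the demonstration is reproducible

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 20 dice rolls, then 1,000 unrelated random "interest rate" series;
# record the strongest in-sample correlation found.
dice = [random.randint(1, 6) for _ in range(20)]
best = max(
    abs(correlation(dice, [random.gauss(3.0, 1.0) for _ in range(20)]))
    for _ in range(1000)
)
print(f"strongest chance correlation in 1,000 tries: {best:.2f}")
```

The "best" correlation found this way is meaningless for prediction, which is why a statistically literate answer to the dice question is simply "no."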
Real-world challenges
Unfortunately, most potential real-world uses of AI involve evolving situations with unclear goals. For example, shortly after IBM's Watson defeated the top human players on "Jeopardy!", IBM boasted that Watson would revolutionize healthcare: "Watson can read every healthcare text in the world in seconds, and that's our first priority, creating a 'Dr. Watson,' if you will."
Without any real understanding of the meaning of the words, Watson was a big flop. IBM spent more than $15 billion on Watson without any peer-reviewed evidence that it improved patient health outcomes. IBM’s internal documents identified “multiple examples of dangerous and incorrect treatment recommendations.” After more than a year of searching for buyers, IBM sold the data and some algorithms to a private investment firm last January for around $1 billion.
Another example: An insurance company with the quirky name of Lemonade was founded in 2015 and went public on July 2, 2020, with its stock price closing at $69.41, more than double its IPO price of $29. On January 22, 2021, the shares reached a high of $183.26.
What was the buzz? Lemonade sets its insurance rates by using an AI algorithm to analyze user responses to 13 questions posed by an AI chatbot. CEO and co-founder Daniel Schreiber argued that "AI crushes humans at chess, for example, because it uses algorithms that no human could create and none fully understand" and that, similarly, "algorithms we can't understand can make insurance fairer."
How does Lemonade know that its algorithm is "remarkably predictive" when the company has only been around for a few years? It doesn't. Lemonade's losses have increased each quarter, and its stock now trades at less than $20 per share.
Need more proof? AI robotaxis have been touted for over a decade. In 2016, Waymo CEO John Krafcik said the technical issues had been resolved: "Our cars can now handle the toughest driving tasks, such as detecting and responding to emergency vehicles, mastering multi-lane four-way stops, and anticipating what unpredictable humans will do on the road."
Six years later, robotaxis still occasionally go rogue and often rely on human assistance, either in the cars or remotely. Waymo has burned through billions of dollars and is still largely confined to places like Chandler, Arizona, where there are wide, well-marked roads, light traffic, few pedestrians and little rain.
Drones are another AI dream. On May 4, 2022, the AngelList Talent Newsletter gushed that "drones are reshaping the way business is done in a dizzying array of industries. They are used to deliver pizza and vital medical supplies, monitor forest health, and catch spent rocket boosters, to name a few." These are all, in fact, experimental projects still grappling with basic issues, including noise pollution, invasion of privacy, bird attacks and the use of drones for target practice.
These are just a few examples of the reality that startups are too often funded by dreams that turn out to be nightmares. We remember Apple, Amazon.com, Google and other great IPO successes, and forget the thousands of failures.
Recent data (May 25, 2022) from University of Florida finance professor Jay Ritter ("Mr. IPO") show that 58.5% of the 8,603 IPOs issued between 1975 and 2018 had negative three-year returns, and 36.9% lost more than 50% of their value. Only 39 IPOs produced the 1,000%-plus returns investors crave. The average three-year IPO return was 17.1 percentage points lower than the broader U.S. market's. Buying shares of well-managed companies at reasonable prices has been, and will remain, the best strategy for sleeping soundly at night.
Jeffrey Lee Funk is an independent technology consultant and former college professor specializing in the economics of new technologies. Gary N. Smith is the Fletcher Jones Professor of Economics at Pomona College. He is the author of "The AI Delusion" (Oxford, 2018), co-author (with Jay Cordes) of "The 9 Pitfalls of Data Science" (Oxford, 2019) and author of "The Phantom Pattern Problem" (Oxford, 2020).