Deceit pays dividends: How CEO lies can boost stock ratings and fool even respected financial analysts

The multibillion-dollar collapse of FTX – the high-profile cryptocurrency exchange whose founder now awaits trial on fraud charges – serves as a stark reminder of the perils of deception in the financial world.

The lies from FTX founder Sam Bankman-Fried date back to the company’s very beginning, prosecutors say. He lied to customers and investors alike, it is claimed, as part of what U.S. Attorney Damian Williams has called “one of the biggest financial frauds in American history.”

How were so many people apparently fooled?

A new study in the Strategic Management Journal sheds some light on the issue. In it, my colleagues and I found that even professional financial analysts fall for CEO lies – and that the best-respected analysts might be the most gullible.

Financial analysts give expert advice to help companies and investors make money. They predict how much a company will earn and suggest whether to buy or sell its stock. By guiding money into good investments, they help not just individual businesses but the entire economy grow.

But while financial analysts are paid for their advice, they aren’t oracles. As a management professor, I wondered how often they get duped by lying executives – so my colleagues and I used machine learning to find out. We developed an algorithm, trained on S&P 1500 earnings call transcripts from 2008 to 2016, that detects deception with 84% accuracy. Specifically, the algorithm identifies distinct linguistic patterns that occur when a person is lying.
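The study's actual model isn't public, but the core idea – classifying transcripts by word-use patterns – can be sketched with a toy Naive Bayes text classifier. Everything below is illustrative: the training snippets, the labels, and the word cues (such as distancing language) are invented for the sketch, not drawn from the study's data or method.

```python
from collections import Counter
import math

def train_nb(docs, labels):
    """Fit a tiny multinomial Naive Bayes model on labeled transcripts."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for doc, y in zip(docs, labels):
        counts[y].update(doc.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict(model, doc):
    """Return 1 (deceptive-style) or 0 (truthful-style) for a new snippet."""
    counts, priors, vocab = model
    total_docs = sum(priors.values())
    best, best_score = None, float("-inf")
    for y in (0, 1):
        total_words = sum(counts[y].values())
        score = math.log(priors[y] / total_docs)
        for w in doc.lower().split():
            # Laplace smoothing keeps unseen words from zeroing out a class
            score += math.log((counts[y][w] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy training data (invented): deception research often points to
# distancing language, e.g. fewer first-person pronouns.
docs = [
    "i believe we delivered strong results this quarter",          # truthful-style
    "i am confident in our plan and i stand behind it",            # truthful-style
    "the company believes results were as expected frankly",       # deceptive-style
    "as everyone knows the numbers speak for themselves frankly",  # deceptive-style
]
labels = [0, 0, 1, 1]
model = train_nb(docs, labels)
print(predict(model, "frankly the company expected these numbers"))  # -> 1
```

A production system would train on thousands of labeled transcripts and far richer features, but the principle is the same: the classifier learns which word patterns co-occur with known deception and scores new speech against them.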

Our results were striking. We found that analysts were far more likely to give “buy” or “strong buy” recommendations after listening to deceptive CEOs – by nearly 28 percentage points, on average – than after listening to their more honest counterparts.

We also found that highly esteemed analysts fell for CEO lies more often than their lesser-known counterparts did. In fact, analysts named “all-stars” by trade publisher Institutional Investor were 5.3 percentage points more likely than their less-celebrated peers to upgrade the stocks of habitually dishonest CEOs.

Although we applied this technology to gain insight into this corner of finance for an academic study, its broader use raises a number of challenging ethical questions around using AI to measure psychological constructs.

Biased toward believing

It seems counterintuitive: Why would professional givers of financial advice consistently fall for lying executives? And why would the most reputable advisers seem to have the worst results?

These findings reflect the natural human tendency to assume that others are being honest – what’s known as the “truth bias.” Thanks to this habit of mind, analysts are just as susceptible to lies as anyone else.

What’s more, we found that elevated status fosters a stronger truth bias. First, “all-star” analysts often gain a sense of overconfidence and entitlement as they rise in prestige. They start to believe they’re less likely to be deceived, leading them to take CEOs at face value. Second, these analysts tend to have closer relationships with CEOs, which studies show can increase the truth bias. This makes them even more prone to deception.

Given this vulnerability, businesses may want to reevaluate the credibility of “all-star” designations. Our research also underscores the importance of accountability in governance and the need for strong institutional systems to counter individual biases.

An AI ‘lie detector’?

The tool we developed for this study could have applications well beyond the world of business. We validated the algorithm using fraudulent transcripts, retracted articles in medical journals and deceptive YouTube videos. It could easily be deployed in different contexts.

It’s important to note that the tool doesn’t directly measure deception; it identifies language patterns associated with lying. This means that even though it’s highly accurate, it’s susceptible to both false positives and negatives – and false allegations of dishonesty in particular could have devastating consequences.
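The danger of false positives is amplified by base rates. The arithmetic below is a hypothetical illustration, not a result from the study: it assumes the 84% figure applies symmetrically to liars and truth-tellers, and that only 5% of statements are actually deceptive.

```python
def false_alarm_share(accuracy, base_rate, n=1000):
    """Of all statements the detector flags as lies, what fraction are
    actually honest? Assumes accuracy is the same for both classes."""
    liars = n * base_rate
    honest = n - liars
    true_flags = liars * accuracy          # real lies, correctly flagged
    false_flags = honest * (1 - accuracy)  # honest speech, wrongly flagged
    return false_flags / (true_flags + false_flags)

# With 84% accuracy but only 5% of statements deceptive,
# most "lie" flags land on honest speakers:
print(round(false_alarm_share(0.84, 0.05), 2))  # -> 0.78
```

In other words, under these illustrative assumptions, roughly three out of four accusations of dishonesty would be wrong – which is why accuracy alone is a poor guide to how such a tool would behave in the wild.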

What’s more, tools like this struggle to distinguish socially beneficial “white lies” – which foster a sense of community and emotional well-being – from more serious lies. Flagging all deceptions indiscriminately could disrupt complex social dynamics, leading to unintended consequences.

These issues would need to be addressed before this type of technology is adopted widely. But that future is closer than many might realize: Companies in fields such as investing, security and insurance are already starting to use it.

Big questions remain

The widespread use of AI to catch lies would have profound social implications – most notably, by making it harder for the powerful to lie without consequence.

That might sound like an unambiguously good thing. But while the technology offers undeniable advantages, such as early detection of threats or fraud, it could also usher in a perilous transparency culture. In such a world, thoughts and emotions could become subject to measurement and judgment, eroding the sanctuary of mental privacy.

This study also raises ethical questions about using AI to measure psychological characteristics, particularly where privacy and consent are concerned. Unlike traditional deception research, which relies on human subjects who consent to be studied, this AI model operates covertly, detecting nuanced linguistic patterns without a speaker’s knowledge.

The implications are staggering. For instance, in this study, we developed a second machine learning model to gauge the level of suspicion in a speaker’s tone. Imagine a world where social scientists can create tools to assess any facet of your psychology, applying them without your consent. Not too appealing, is it?

As we enter a new era of AI, advanced psychometric tools offer both promise and peril. These technologies could revolutionize business by providing unprecedented insights into human psychology. They could also violate people’s rights and destabilize society in surprising and disturbing ways. The decisions we make today – about ethics, oversight and responsible use – will set the course for years to come.

FAQs

How did FTX founder Sam Bankman-Fried deceive customers and investors, leading to a high-profile financial fraud case?

FTX founder Sam Bankman-Fried allegedly engaged in deception from the company’s inception, lying to both customers and investors. This deception has led to one of the largest financial fraud cases in American history, according to U.S. Attorney Damian Williams.

What did the study in the Strategic Management Journal reveal about the susceptibility of financial analysts to CEO lies?

The study found that financial analysts, even experienced ones, are susceptible to CEO lies. They tend to give more positive recommendations, like “buy” or “strong buy,” after listening to deceptive CEOs, and highly esteemed analysts are even more prone to falling for such lies.

How did the algorithm developed for the study identify deception in CEOs?

The algorithm used linguistic patterns identified in S&P 1500 earnings call transcripts from 2008 to 2016 to detect deception. It achieved an 84% accuracy rate in identifying deception by recognizing distinct language patterns associated with lying.

Why were highly esteemed financial analysts more likely to be deceived by CEOs, according to the study?

The study suggests that highly esteemed analysts may develop a stronger truth bias and overconfidence as they gain prestige. They also tend to have closer relationships with CEOs, which can increase their susceptibility to deception.

What are the potential applications and limitations of the AI tool developed for this study?

The AI tool identifies language patterns associated with lying but does not directly measure deception. It could have applications beyond business, including identifying fraud in various contexts. However, it struggles to distinguish between harmless “white lies” and serious deception, and ethical and privacy concerns need to be addressed before widespread adoption.