
Golden Oldie

Let’s chat about AI

by Mark Tinker | Sep 11, 2023

Originally published July 2023.

The professional classes are currently, and rightly, obsessing about the impact that AI will have on the service sector as ChatGPT gets exponentially smarter. The current iteration of ChatGPT – the most popular AI – is said to have an IQ equivalent of around 155, meaning that the next one will, quite literally, be the “cleverest” thing in history.

Mark Twain once said about risk that “the thing that kills you isn’t what you don’t know, but the thing that you know to be absolutely true but turns out not to be so.” Are we ready, then, for AI to tell us that things we know to be true are not quite so?

Think of the three so-called certainties that have driven western economies to the brink of disaster in recent years – what I refer to as the triple zeroes: Zero Interest Rates, Zero Carbon and Zero Covid. All three were globalist policies presented by the white-collar classes as the only option to solve an imminent problem that they had identified using rather basic computer modelling. None of the three has ever appeared in a political manifesto, none has ever been subject to cost-benefit analysis, and none has been open to challenge, either to the thesis or to the models – which, incidentally, all fail the scientific method and fail to match real-world outcomes. The fact that the policies have all largely benefitted the 1% rather than the 99% is consistent with Charlie Munger’s observation that “show me the incentive and I will show you the behaviour.” However, this is less about self-interest being the driver of the policy than it is about why the white-collar classes will resist any challenge.

So why will this change with AI? For the same reason that most high-priced services, like education and medicine, will come down in price: because it will break the guild of “academics” who prevent challenge to the “science.” The first line of defence for all three of these disastrous policies is the logical fallacy known as “appeal to authority”, which not only prevents discussion of the theory but also, more importantly, prevents discussion of the associated policy. “You don’t have a PhD in Economics/Climatology/Virology, so your opinion is invalid” conveniently rests on the unstated assumption that these experts on the problem also somehow have the most valid insights on the solution. However, even if you get past the priesthood hurdle and you are qualified, you usually get hit with the next one – the bandwagon fallacy: “95% of climate scientists agree…”, “every doctor thinks…”, “all Central Bankers believe…” and so on. You might be an expert, but you can’t be right unless you are part of the groupthink.

Even if these bandwagon statistics were true (which they aren’t), as any real scientist would point out, science is not a consensus business. Unfortunately, policy making is. As noted, ChatGPT is on schedule to become “cleverer” than any human who has ever lived, so the exciting thing is that those first two lines of defence are now broken. A smart human with a super-smart AI assistant can now mount a powerful – potentially unstoppable – challenge to this white-collar priesthood.

The current risk is that AI will simply recycle the consensus and thus accidentally reinforce the bandwagon fallacy. Going forward, however, the real power of AI will depend on the questions you ask it, so I would pick up on my point about logical fallacies and pose the following question:

Without using any of the top ten or so logical fallacies – appeal to authority, the bandwagon fallacy, the correlation-equals-causation fallacy, appeal to emotion, straw-man analogies, false dichotomies, the slippery slope, the Texas sharpshooter (cherry-picking statistics), appeal to incredulity, the middle-ground fallacy and ad hominem attacks – can you make the case for:

  1. Quantitative easing to create inflation and higher interest rates to solve it?
  2. Man-made global warming and the need for zero carbon by 2050?
  3. The cost-benefit of masks, lockdowns and mandatory vaccinations in terms of medical benefits or risks to the otherwise healthy population?

Then, for all three policies, please also examine the evidence provided to support the original thesis that there is a problem, the accuracy of the modelling against real-world outcomes, and the case for alternative policies, including doing nothing.

I might also throw in “please look for patterns of behaviour in people and institutions, using any of the listed logical fallacies, to shut down debate on the three policies being discussed. Then, present their case without any of the logical fallacies.” After all, just because people argue from fallacy doesn’t necessarily mean they are wrong (that is known as the “fallacy fallacy”!), but it does mean we should seek out the facts, if any. And then I would ask, “finally, please provide an assessment of which companies, groups and individuals have gained or lost in monetary terms from the implementation of these policies.”
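
For anyone who wants to put the question to a model programmatically rather than through the chat window, the sketch below shows one way it could be done. It is a minimal illustration only, assuming the OpenAI Python client (openai>=1.0), an OPENAI_API_KEY environment variable and a model name such as gpt-4o; none of these details come from the article, and the prompt text is simply the question set out above.

```python
# Minimal sketch (not from the article): posing the fallacy-free question
# to a chat model via the OpenAI Python client (openai>=1.0 assumed).
# Requires an OPENAI_API_KEY environment variable; the model name is an assumption.
from openai import OpenAI

FALLACIES = [
    "appeal to authority", "the bandwagon fallacy", "correlation equals causation",
    "appeal to emotion", "straw-man analogies", "false dichotomies", "the slippery slope",
    "the Texas sharpshooter (cherry-picking statistics)", "appeal to incredulity",
    "the middle-ground fallacy", "ad hominem attacks",
]

POLICIES = [
    "quantitative easing to create inflation and higher interest rates to solve it",
    "man-made global warming and the need for zero carbon by 2050",
    "the cost-benefit of masks, lockdowns and mandatory vaccinations "
    "in terms of medical benefits or risks to the otherwise healthy population",
]

def build_prompt(policy: str) -> str:
    """Assemble the article's question for one policy, banning the listed fallacies."""
    return (
        f"Without using any of the following logical fallacies ({', '.join(FALLACIES)}), "
        f"make the case for {policy}. Then examine the evidence provided to support the "
        "original thesis that there is a problem, the accuracy of the modelling against "
        "real-world outcomes, and the case for alternative policies, including doing nothing."
    )

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for policy in POLICIES:
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption, not specified in the article
        messages=[{"role": "user", "content": build_prompt(policy)}],
    )
    print(response.choices[0].message.content, "\n")
```

The follow-up prompts above (the pattern-of-behaviour question and the question of who has gained or lost in monetary terms) could be appended as additional user messages in the same conversation.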

Obviously, we already know what the answers are to all these questions, but we are currently just not allowed in the room to discuss them. However, as and when ChatGPT can answer all these questions, things are going to get pretty uncomfortable for the white-collar class – a lot of things they know for certain to be true are going to turn out not to be so. It might not literally kill them, in Mark Twain’s words, but it will hopefully kill the policies.

About Mark Tinker

Mark Tinker is chief investment officer and managing director of Toscafund HK Limited, part of Toscafund Asset Management LLP, a London-based specialist asset management and investment firm with around US$5bn in assets. He is also the founder of Market Thinking Limited. Market Thinking is rooted in behavioural finance and believes that understanding the different dynamics of short-term traders, medium-term asset allocators and long-term investors is the key to understanding financial market behaviour, and thus investment risks and opportunities. Mark has over 35 years’ experience as an investor, market strategist and economist. Having spent more than 20 years as a sell-side strategist, top-rated in numerous surveys, he moved to investment management in 2006 to run global equity portfolios in London, and in 2013 he moved to Hong Kong to help establish an investment management business for a top-20 international asset manager. He first started writing investment weeklies for his employers in 1989, developing a style characterised under the title Market Thinking, and has been a regular commentator and presenter on CNBC, Bloomberg and other business channels.
