Using artificial intelligence with a growth mindset.
Artificial Intelligence (AI) has already become an integral part of all our lives, revolutionising many industries. Its role within the property sector is in its infancy, but it will undoubtedly play a huge part in everything from reading, correcting and creating complex legal documents to providing the interface between industry professionals and their customers and clients.
However, the Utopian dream of many to replace estate agents by unlocking AI’s full potential will require more than just technical expertise; it will demand a growth mindset with a focus on continuous improvement and a significant amount of bullsh*t checking!
A growth mindset, as conceived by Stanford psychologist Carol Dweck and colleagues, emphasises the idea that skills and knowledge can be developed through hard work and dedication. Unlike those with a fixed mindset, who assume that abilities are unchangeable, individuals with a growth mindset embrace effort and view challenges as opportunities for learning and growth. Fostering this mindset will be crucial for effective AI development, but we must also be prepared to question and critique the outputs along the way, because the track record so far is mixed, to say the least!
When raising a child, a parent or guardian doesn’t wait until the child has mastered every skill before allowing them to explore the world. Similarly, AI benefits from early deployment. Instead of hoarding data behind closed doors, organisations are putting AI to work in real-world scenarios. This approach generates valuable feedback and enriches the algorithm with new data. Just as a child learns to ride a bike by getting on, falling off and going again, AI learns by actually doing, but there has already been many a scraped knee!
A growth mindset acknowledges that learning never stops. AI systems, like curious learners, thrive on experimentation, learning from both successes and failures. And there have been some notable failures.
IBM’s diagnostic AI system, Watson Health, outperformed doctors in laboratory tests but failed in the field, giving many inaccurate and dangerous treatment recommendations. It was subsequently sold off for “parts”.
The mighty Google recently had so much racial and gender diversity fed into its Gemini AI image generator that it produced pictures of Black and Asian German soldiers from WW2, together with other similarly inaccurate historic imagery, such as a depiction of America’s Founding Fathers featuring a far more diverse group than was actually present. The desire for diversity, and to avoid stereotyping, had outweighed historical accuracy.
Microsoft’s chatbot, Tay, was designed to learn from human interactions and engage in slang-filled conversations. Within 24 hours of being let loose on Twitter, Tay tweeted, “Hitler was correct to hate the Jews.” The experiment revealed just how quickly toxic input data can corrupt an AI model once it is released into the real world.
Amazon aimed to automate its recruitment process using AI. However, having been trained on a decade’s worth of CVs from a male-dominated industry, the system turned out to be sexist, favouring male candidates. This incident highlighted the dangers of biased training data and the need for rigorous testing to prevent discriminatory outcomes.
AI-generated deepfakes can manipulate video and audio, blurring the line between reality and fiction, and pose huge risks to privacy, security and public trust, so ensuring responsible use of AI is crucial. As we have seen, AI systems learn from historical data, which may itself contain biases; when used in critical domains like criminal justice or lending, biased algorithms can perpetuate discrimination. AI systems are clearly not infallible. Errors can occur through flawed design, incorrect assumptions or unforeseen scenarios, and establishing accountability and transparency will be essential to mitigating the risks.
Whilst AI has made remarkable strides, its failures underscore the need for responsible development, rigorous testing, and ongoing vigilance to prevent unintended consequences. As we continue to integrate AI into our lives, understanding its limitations and addressing its shortcomings will be crucial for a successful and ethical future.
Like Elon Musk’s SpaceX programme, AI development is moving fast, and many feel it is better to do something, make an error, then quickly correct it and move on, rather than try to get everything sorted before trying it out. In the world of property, the outcomes generated by AI may have less serious consequences than in other areas, but they could still be devastating for individuals and businesses whose decision making relies on, or is undertaken by, uninhibited AI.
I’m not advocating a return to the quill pen and parchment; we need to embrace change. But I am wary about how AI may shape the future of the property world. There is currently a mad clamour to introduce new AI systems, many of which are, by definition, still learning to walk while their users already know how to run. AI will certainly help speed up processes and do the “heavy lifting” but, as with many things, rubbish in, rubbish out.
Of course, the public don’t have a huge amount of trust in estate agents and what they say as it is, so I may be overly concerned about AI eroding that trust further. We shall see.