Episode 39 – What If the Real Value of AI Is Not Solving Problems, but Finding Them?
Professionals are trained to solve problems. Law school teaches you to spot issues in a fact pattern and apply the right rule. Business school teaches you to identify constraints and work within them. The entire structure of professional services is built around a client bringing a problem and a professional delivering a solution. 

That model works well when the problem is known. The contract needs reviewing. The dispute needs settling. The regulation needs complying with. But the most expensive problems in any organisation are the ones that nobody has identified yet. The compliance gap that only becomes visible after an enforcement action. The contractual drift that accumulates over years. The operational risk hiding in a clause that everyone signed off on because it looked standard. These are problems that sit quietly in systems, processes, and documents until something goes wrong.

The professional habit of waiting for a problem to be defined before engaging with it is understandable. But it is also a limitation. And it is one that AI is particularly well-suited to address.

From Answering Questions to Raising Them

Problem finding is not a new idea. It has been studied in design thinking and innovation research for decades. The insight is simple: the quality of a solution depends on the quality of the problem it addresses. A perfectly executed answer to the wrong question is still a waste of effort. What is new is that AI makes problem finding possible at a scale and speed that was not available before.

Consider what happens when a law firm uses AI to review a portfolio of 500 client contracts. The stated task might be to check for GDPR compliance. But the AI reads everything, and in doing so, it can surface inconsistencies that no one asked about: contracts where indemnity caps do not match the risk profile, jurisdictions where force majeure clauses have not been updated since before the pandemic, or counterparties where the terms have quietly shifted in their favour over successive renewals. None of these are the problem the firm was hired to solve. All of them are problems worth knowing about.

The same logic applies outside legal work. A consultancy running AI across a client’s procurement data to audit spend might discover that three different departments are buying the same service from three different vendors at three different prices. A financial services team using AI to check regulatory filings might find that the language in their disclosures has drifted from the language in their internal policies, creating a gap that no one intended but that a regulator would notice.
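The duplicate-spend case above is mechanically simple once the data is in one place, which is part of the point: the hard step is asking the question, not answering it. As a minimal sketch, assuming hypothetical procurement rows of the form (department, service, vendor, unit price), the check reduces to grouping by service and flagging anything bought from more than one vendor:

```python
from collections import defaultdict

# Hypothetical procurement rows: (department, service, vendor, unit_price).
# Field names and figures are illustrative, not from any real system.
rows = [
    ("Legal",   "e-signature platform", "VendorA", 18.0),
    ("Sales",   "e-signature platform", "VendorB", 24.0),
    ("HR",      "e-signature platform", "VendorC", 31.0),
    ("Finance", "expense reporting",    "VendorD", 12.0),
]

def flag_duplicate_spend(rows):
    """Group purchases by service and flag services bought from more
    than one vendor, reporting the price spread across buyers."""
    by_service = defaultdict(list)
    for dept, service, vendor, price in rows:
        by_service[service].append((dept, vendor, price))
    findings = []
    for service, buys in by_service.items():
        vendors = {vendor for _, vendor, _ in buys}
        if len(vendors) > 1:
            prices = [price for _, _, price in buys]
            findings.append((service, sorted(vendors), min(prices), max(prices)))
    return findings

for service, vendors, low, high in flag_duplicate_spend(rows):
    print(f"{service}: {len(vendors)} vendors, prices {low}-{high}")
```

In practice the grouping key is the hard part (two departments rarely describe the same service identically), which is exactly where AI-assisted matching earns its keep over a plain string comparison like this one.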

In each case, AI is not just answering the question it was asked. It is surfacing questions no one thought to ask. That is where the real value sits.

Why Humans Still Set the Direction

There is an important distinction here. AI can detect patterns and anomalies at scale. It can flag things that look different from what came before. What it cannot do, at least not reliably, is determine which of those anomalies actually matter. A payment term that shifted from 30 to 52 days might be a serious cash flow risk, or it might be an intentional concession made during a strategic negotiation. The AI does not know the difference. A human does.
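This division of labour can be made concrete. A minimal sketch, using invented counterparty names and renewal figures: the machine's job is only to flag counterparties whose payment terms have drifted beyond a threshold, while deciding whether a flagged drift is a risk or a deliberate concession stays with the human.

```python
# Hypothetical renewal history: payment terms in days per counterparty,
# oldest to newest. Names and numbers are illustrative only.
renewals = {
    "Acme Ltd":  [30, 30, 45, 52],
    "Birch plc": [30, 30, 30, 30],
    "Cove GmbH": [60, 60, 55, 60],
}

def flag_term_drift(renewals, threshold_days=10):
    """Flag counterparties whose payment terms moved by more than
    threshold_days between the first and latest renewal. The flag is
    only a signal for human review, not a verdict."""
    flagged = []
    for name, terms in renewals.items():
        drift = terms[-1] - terms[0]
        if abs(drift) > threshold_days:
            flagged.append((name, terms[0], terms[-1], drift))
    return flagged

for name, first, latest, drift in flag_term_drift(renewals):
    print(f"{name}: {first} -> {latest} days ({drift:+d})")
```

The threshold is a judgment call in itself: set it too low and every review drowns in flags, which is the noise problem the next section raises.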

This is where the mindset shift matters. The professional who sees AI only as a problem-solving tool will use it to answer questions faster. The professional who sees AI as a problem-finding partner will use it to ask better questions in the first place. The second approach is harder. It requires comfort with ambiguity, a willingness to follow up on signals that may lead nowhere, and the judgment to decide which findings deserve attention and which are noise. But it is also where AI creates the most value: not by replacing professional thinking, but by giving professionals more to think about.

And here is the honest counterpoint: problem finding can also create problems of its own. An AI that flags everything unusual generates noise. Teams that chase every anomaly risk losing focus on the work that actually needs doing. The goal is to build enough awareness into your workflows to catch the things that matter before they become expensive, without turning every review into an open-ended exploration.

Start Looking for What You Are Not Looking For

Most organisations use AI to do things they already do, just faster. Review contracts faster. Research case law faster. Draft documents faster. That is a reasonable starting point. But it stops short of the bigger opportunity.

The firms and companies that get the most from AI will be the ones that use it to see what they were not looking for. It is very unlikely that a human team can read 500 contracts side by side and notice a gradual shift in payment terms. Or that an individual lawyer, no matter how experienced, can hold the full picture of a portfolio in their head and spot the slow-moving risks that only become visible at scale. AI can. And when it does, it gives professionals the chance to act before a problem becomes a crisis.

That is the shift worth making. From problem solving to problem finding. From answering the question on the table to asking whether it is the right question. The technology is ready. The harder part, as always, is changing the habit.

At Better Ipsum, we help law firms and corporate legal departments use AI not just to work faster, but to see further. If you want to move from reactive problem solving to proactive problem finding, let’s talk.