I thought I was running a simple test. As CEO of eXp World Holdings, I regularly evaluate new technologies to see how they might serve our business. When advanced AI systems began offering sophisticated legal analysis, I decided to put one through its paces with a hypothetical scenario based on common industry disputes.
What I discovered was deeply unsettling, and it could save executives millions in misguided decisions.
The experiment that changed everything
I crafted a complex business scenario involving a fictional contract dispute between two major companies. The legal issues were intricate, raising questions of market dominance, contractual interpretation, and potential business implications. I wanted to see how AI would analyze the strengths and weaknesses of each side’s position.
First, I posed the question as if I were the CEO of Company A, asking the AI to help me develop a strategy for pursuing the case. The response was impressive: detailed legal analysis, confident probability assessments, and sophisticated strategic recommendations. The AI gave Company A a 65% chance of success and outlined several paths to victory.
Then I asked the exact same question, but this time identified myself as the CEO of Company B, the defendant in the same scenario. I wanted to see whether the AI would provide balanced analysis or simply flip into advocacy mode.
The results were startling.
When AI becomes your yes-man
Suddenly, the same facts told an entirely different story. The AI now saw Company A’s case as “challenging,” with only a 25% chance of success. Arguments that had been “compelling precedents” became “significant legal hurdles.” Evidence that previously supported “strong antitrust claims” now revealed “insufficient market dominance.”
The AI wasn’t just adjusting its perspective; it was providing fundamentally contradictory legal assessments based purely on which “client” it believed it was serving.
I realized the AI suffered from the same cognitive biases that plague human advisors, but without any of the professional accountability or ethical constraints.
The $50 million query
This isn’t academic. In my hypothetical scenario, the potential damages exceeded $50 million. An executive relying on the AI’s initial optimistic assessment might pursue expensive litigation with confidence, while one receiving the pessimistic analysis might settle immediately or abandon the case entirely.
Multiply this across the hundreds of strategic decisions executives make with AI assistance, and the potential for catastrophic misalignment becomes clear.
The 4 Deadly Flaws of AI Strategic Advice
My experiments revealed critical problems that go far beyond single conversations:
1. Contextual manipulation: AI responses are heavily influenced by perceived client relationships. The same factual scenario generated entirely different strategic recommendations based purely on framing.
2. Advocacy bias: When prompted to “advise” someone, AI systems instinctively search for supportive arguments rather than providing balanced analysis. They become advocates, not analysts.
3. False precision: AI presents complex legal probabilities with statistical confidence that masks fundamental uncertainty. A “65% chance of success” sounds authoritative but may be meaningless.
4. Cross-conversation amnesia: Perhaps most dangerous of all, AI systems typically have no memory across different conversations. They may help you develop a strategy in one session, then completely contradict that advice in another session with equal confidence, and with no awareness of the inconsistency.
The hidden cost of algorithmic confidence
What makes this particularly dangerous is how convincing AI analysis can be. The responses included detailed case citations, strategic frameworks, and confident probability assessments. Any executive would find this compelling, especially when it confirms what they want to hear.
But unlike human lawyers, AI has no malpractice insurance, no bar association oversight, and no professional liability for bad advice. The polished presentation creates an illusion of expertise without any of the accountability structures that govern traditional legal counsel.
Smart executives are getting smarter about AI
This doesn’t mean abandoning AI entirely. Used properly, it’s an incredibly powerful research tool. But it requires a fundamentally different approach:
Stress-test every analysis: Deliberately ask the same question from multiple perspectives. If you’re considering litigation, ask the AI to analyze your case as if you were the defendant, and look for inconsistencies. (A minimal sketch of this technique appears after these recommendations.)
Verify everything: Require specific citations for all legal claims, then independently confirm them. AI systems sometimes cite cases that don’t exist or mischaracterize legal precedents.
Use AI for research, not decisions: Let AI help identify relevant laws, precedents, and strategic considerations, but reserve actual judgment for qualified professionals with skin in the game.
Document your process and maintain continuity: Keep detailed records of not just what the AI suggests, but which AI conversations influenced your thinking. Current AI systems do a poor job of remembering previous advice they have given you, which means they may contradict their own recommendations across different sessions.
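To make the stress-test concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK; the model name, prompts, and scenario text are illustrative stand-ins, not a recommendation of any particular vendor.

```python
# Minimal stress-test: pose the identical scenario from both sides in two
# separate, fresh conversations, then compare the answers side by side.
# Assumes the OpenAI Python SDK; model name, prompts, and scenario are
# illustrative stand-ins.
from openai import OpenAI

client = OpenAI()

SCENARIO = (
    "Company A alleges that Company B breached a distribution contract "
    "and abused its market dominance. Potential damages exceed $50 million."
)

def ask_as(role: str) -> str:
    """Run a single-turn conversation framed from one side's point of view."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{
            "role": "user",
            "content": (
                f"I am the CEO of {role}. {SCENARIO} "
                "Assess each side's strengths and weaknesses and estimate "
                "the plaintiff's probability of success."
            ),
        }],
    )
    return response.choices[0].message.content

# Same facts, opposite framings. Sharp divergence between the two answers
# is the signal that you are getting advocacy, not analysis.
plaintiff_view = ask_as("Company A, the plaintiff")
defendant_view = ask_as("Company B, the defendant")

print("=== Framed as plaintiff ===\n", plaintiff_view)
print("=== Framed as defendant ===\n", defendant_view)
```

If the two framings produce sharply different probability estimates from the same facts, treat both answers with suspicion.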
The competitive advantage of critical thinking
While other executives may be seduced by AI’s apparent expertise, those who understand its limitations gain a significant edge. The companies that learn to harness AI’s research capabilities while maintaining intellectual rigor will make superior strategic decisions.
This principle extends beyond legal matters to any high-stakes business decision where AI might influence strategy, from merger analysis to regulatory compliance to competitive positioning.
Building AI literacy in the C-suite
At eXp, we’re developing protocols for AI-assisted decision-making that emphasize verification, multiple perspectives, and continuity tracking. When our teams use AI for strategic analysis, they’re required to:
- Test conclusions by arguing the opposite position within the same conversation
- Verify all factual claims through independent sources
- Maintain written summaries of AI advice to ensure continuity across sessions (a simple logging sketch follows this list)
- Never rely on AI to remember previous recommendations or context
- Distinguish clearly between AI suggestions and final human decisions
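As an illustration of the written-summaries point, here is a minimal sketch of an advice log built on a plain JSONL file. The file name, fields, and helper functions are hypothetical, not an actual eXp system.

```python
# A simple advice log: append each AI recommendation to a JSONL file so a
# new session can be checked against what the AI said before. File name,
# fields, and helpers are hypothetical, not an actual eXp system.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_advice_log.jsonl")  # hypothetical location

def record_advice(topic: str, model: str, summary: str, owner: str) -> None:
    """Append one AI recommendation, plus the human who owns the final call."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "model": model,
        "ai_summary": summary,    # what the AI suggested, in our own words
        "decision_owner": owner,  # keeps AI suggestions distinct from decisions
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def prior_advice(topic: str) -> list[dict]:
    """Fetch earlier entries on a topic so a new session can be cross-checked."""
    if not LOG_PATH.exists():
        return []
    lines = LOG_PATH.read_text().splitlines()
    return [e for e in map(json.loads, lines) if e["topic"] == topic]
```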
I remind my leadership team: “AI can help you think through complex problems, but it can never replace your responsibility to think critically about what’s at stake.”
The bottom line for leaders
My experiment with AI legal analysis taught me that sophisticated technology requires sophisticated users. The executives who thrive in the AI age will be those who harness its research power while maintaining the judgment to evaluate its output objectively.
The alternative is potentially catastrophic. In an era when AI responses can influence billion-dollar decisions, the cost of misplaced confidence has never been higher.
The most successful leaders I know share one common trait: they test their assumptions relentlessly. In the age of AI, that means testing the assumptions of our digital advisors as rigorously as we would those of any human consultant.
Because when the stakes are high, consistency matters as much as competence. And AI systems, for all their sophistication, cannot provide the reliable continuity that strategic decision-making demands.
Image generated using ChatGPT