The Dangers of Using A.I. for Medical Advice Instead of Doctors — A Case With Real Consequences

The growing integration of artificial intelligence into everyday decision-making has raised new concerns about its reliability, particularly in high-stakes areas like health care. A case involving Ben Riley and his father, Joe Riley, illustrates how generative A.I. tools can influence personal medical decisions, sometimes with serious consequences when their outputs are treated as authoritative.

Ben Riley, who had spent years studying and writing about the limitations and risks of A.I., became aware of a troubling situation while reviewing his father’s medical records. His father, a 75-year-old retired neuroscientist diagnosed with chronic lymphocytic leukemia, had delayed treatment for months despite repeated recommendations from his oncologist. Medical notes emphasized urgency, warning that postponing care could lead to worsening illness and death, yet Joe Riley had reassured his family that treatment was not immediately necessary.

The situation became clearer when Ben discovered that his father had been relying heavily on generative A.I. tools, including Perplexity AI, to research his condition. Joe Riley used these tools to interpret lab results, review scientific literature, and explore alternative explanations for his symptoms. He became convinced that he had developed a more aggressive condition known as Richter’s transformation and that the recommended treatments would worsen his health—claims his oncologist repeatedly said were unsupported by clinical evidence.

Joe Riley’s reliance on A.I. reflected both his long-standing interest in technology and his skepticism toward the medical system. Over time, he developed confidence in the information provided by A.I. systems, describing them as powerful tools for learning. In one message to his son, he wrote that “it is amazing how much one can learn with a week or two of the right A.I. programs,” suggesting that his independent research had validated his conclusions, even when they conflicted with medical advice.

[Image: ChatGPT screenshot (OpenAI / ChatGPT, 2024)]

His oncologist, Dr. Eddie Marzbani, attempted multiple approaches to persuade him to begin treatment, explaining that modern therapies could significantly extend his life. According to medical assessments, there were no clinical signs supporting Joe Riley’s belief that his condition had transformed into a more aggressive disease. Despite these reassurances, he continued to question the diagnosis and declined treatment, influenced in part by A.I.-generated summaries that presented misleading or inaccurate interpretations of scientific research.

The limitations of those A.I. outputs became evident when Ben Riley shared one such report with medical experts whose work had been cited. One researcher, Dr. David Bond, found that the document included claims that were illogical, misrepresented studies, and contained statistics that appeared fabricated. The report’s authoritative tone and structured presentation, however, made it difficult for a non-expert to identify these flaws, highlighting a broader issue with generative A.I.: its ability to produce convincing but unreliable information.

As time passed, Joe Riley’s health declined significantly. By the time he agreed to begin treatment, more than a year after it was first recommended, his body was too weakened to tolerate the therapy effectively. The delayed intervention reduced the likelihood of a positive outcome, and complications followed shortly after treatment began.

Throughout this period, communication between father and son became increasingly strained. Ben Riley, despite his background in analyzing A.I. systems, struggled to counter the influence these tools had on his father’s thinking. The disagreement underscored a key challenge in the adoption of A.I.: even informed users may overestimate the accuracy of outputs, particularly when those outputs align with their existing beliefs.

Joe Riley died later that year, with his cancer listed among the causes of death. In reflecting on the events, Ben Riley noted that A.I. was not the sole factor in his father’s decisions, acknowledging that personal skepticism toward doctors also played a role. However, he emphasized that the technology contributed by presenting flawed information with the appearance of scientific credibility, making it difficult to distinguish fact from error.

The case highlights ongoing concerns about the use of generative A.I. in health contexts, especially as technology companies continue to expand consumer-facing medical tools. While these systems are designed to assist with information gathering, they are not substitutes for professional medical judgment. The experience of the Riley family illustrates how reliance on such tools, without appropriate safeguards or expertise, can complicate decision-making in critical situations.
