What does it mean to trust AI, not just as a technology, but as a decision-making agent? That question anchored the first fireside chat of the evening, moderated by Jyothi Prakash Gandhamaneni (JP) of LatentView Analytics, in conversation with Carol Rosan of Stellarus.
In response to JP’s opening question on how trust is built in high-stakes contexts like healthcare, Carol offered a human-centric framing: credibility, reliability, and intimacy. “AI can often meet the first two,” she said, “but it falls short on intimacy, empathy, and human connection.”
At Stellarus, Carol has seen how both healthcare and financial services wrestle with fragmented governance and ambitious growth demands. She has also led a transformation to consolidate years of unstructured clinical data (e.g., doctors’ notes, claims records, and patient histories) into a unified platform using GenAI. That effort yielded a significant reduction in ingestion time. Yet she emphasized that speed is just one part of the equation. “Everyone wants faster access to data,” she said, “but without proper context, speed may lead to harm.”
This insight reflects a broader industry risk. A recent Trustmarque report found that while 93% of organizations use AI in some form, only 7% have embedded governance frameworks, and just 8% integrate governance into the software development lifecycle. Most businesses lack oversight tools such as audit trails, accountability mechanisms, and ongoing bias testing.
JP steered the discussion toward explainability. Regression models offer transparency by default, but modern GenAI systems are opaque. “Trust isn’t about making promises,” Carol said. “It’s whether your model delivers on what it promised.” That means performance validation, transparent outcome tracking, and interpretability.
JP also pushed the conversation into user experience, where fairness isn’t just an algorithmic outcome, but a design choice. In healthcare, conversational bots must interpret legal and clinical language while signaling limits and decision boundaries. “A bot doesn’t need all the answers,” Carol said. “But it should guide you to the right questions.”
Asked where GenAI can make the biggest near-term impact, Carol mapped out three practical areas where intelligence meets scale:
- Diagnostics – Assisting clinicians with image analysis and result interpretation
- Preventative Care – Predicting chronic disease risks for earlier intervention
- Telemedicine – Expanding personalized care at scale
Responding to JP’s final question on risk boundaries, Carol said, “There’s a difference between using AI to tell you if a procedure is covered by your insurance plan and using it to actually recommend for or against the surgery.” That line, she stressed, must remain sharply drawn. AI can support health decisions, but it should not replace human judgment.