Recently, I had an opportunity to explore explainable AI, safety culture, and the future of accountable systems with Dr. Leilani Gilpin. A recognized leader in the field, Dr. Gilpin conducts research spanning safety-critical systems, neuro-symbolic methods, and ethical AI. She leads the AI Explainability and Accountability Lab at UCSC, where she and her team work to make intelligent systems more transparent and trustworthy.

Rethinking Explainability and the Accuracy-Interpretability Tradeoff

We started our discussion with the definition of explainability. Dr. Gilpin referred to her widely cited paper "Explaining Explanations," which lays out a two-part framework for explainable AI: a system must be both interpretable (understandable to humans) and complete (reflective of how the model actually works).

However, as generative models like GPT become more complex, she pointed out that the definition may need an update. “Chain-of-thought reasoning in large language models may look correct on the surface,” she noted, “but the logical connections often don’t hold up. That breaks the completeness requirement.” 

One of the core tensions in the AI world is the supposed tradeoff between accuracy and interpretability. Dr. Gilpin acknowledged this concern and noted that increasing a model’s complexity by adding neurons or layers can improve performance but may reduce interpretability. To address this issue, her lab works on neuro-symbolic AI, which blends the deep learning capabilities of neural networks with the logic-based structure of symbolic reasoning. She explained that this hybrid approach offers the promise of high performance without sacrificing interpretability.
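To make the idea concrete, here is a minimal sketch, assuming an invented image-classification task, of how a symbolic layer might vet a neural model's candidate predictions. The labels, scores, and rules are hypothetical illustrations, not code from Dr. Gilpin's lab.

```python
# Minimal neuro-symbolic sketch: a neural model proposes labels, and a symbolic
# rule base accepts only candidates consistent with explicit knowledge.
# All names, scores, and rules are hypothetical and purely illustrative.

def neural_scores(image):
    """Stand-in for a neural classifier's output: label -> confidence."""
    return {"coffee_cup": 0.62, "dinner_plate": 0.21, "water_bottle": 0.17}

# Symbolic layer: each label carries a logical constraint over detected attributes.
RULES = {
    "coffee_cup":   lambda a: a["has_handle"] and a["holds_liquid"],
    "dinner_plate": lambda a: a["is_flat"],
    "water_bottle": lambda a: a["holds_liquid"] and not a["has_handle"],
}

def classify_with_explanation(image, attrs):
    """Return the highest-scoring label whose symbolic constraint holds."""
    for label, score in sorted(neural_scores(image).items(), key=lambda kv: -kv[1]):
        if RULES[label](attrs):
            return label, f"{label}: neural score {score:.2f}, symbolic constraint satisfied"
    return None, "no candidate label was consistent with the symbolic rules"

# Attributes would come from a perception module in a real system.
label, why = classify_with_explanation(
    "cup.jpg", {"has_handle": True, "holds_liquid": True, "is_flat": False}
)
print(label, "->", why)
```

Because the symbolic layer states its constraints explicitly, a rejected or accepted prediction comes with a human-readable reason, which is the interpretability benefit the hybrid approach aims for.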

Regulatory Frameworks for Ethical AI

When asked about regulatory frameworks, Dr. Gilpin noted that ethical AI frameworks should learn from how aviation oversight evolved, combining third-party regulation with a broader safety culture. She argued for independent regulators to enforce safety checks and accountability standards before AI systems are deployed. Drawing on the example of the FAA, she explained how regulation and culture together made air travel safer and more trusted, something she believes AI urgently needs. She also highlighted the importance of establishing AI advisory councils within government, composed of scientists and policy experts rather than just corporate stakeholders. These councils would ensure policymakers receive accurate, technically grounded advice on AI risks.

Aligning Academic Research with Policy-making

Dr. Gilpin emphasized that academic research should play a central role in shaping AI policy, but current incentive structures make this difficult. She noted that top AI conferences prioritize technical papers over policy or ethics work, which often takes longer to publish and is undervalued. As a result, researchers working on governance may struggle to keep pace with peers focused on core AI. To address this, she suggested elevating the prestige of venues that publish policy and accountability research and encouraging a culture of “slower science” that values depth and impact over quantity. These steps would attract more researchers to engage with policy-relevant work and help bridge the gap between AI research and meaningful regulation.

Dr. Gilpin emphasized that government officials and business leaders must take active responsibility for integrating ethical AI, both by developing their own understanding of how the technology works and by prioritizing it on the policy agenda. She reiterated her call for independent AI advisory councils within government, similar to those for food safety or aviation, to provide ongoing input from scientific experts.

AI Accountability

When asked who bears responsibility for bias in AI products, Dr. Gilpin responded that it depends on context, as multiple parties may be involved, including data collectors, model developers, and end users. Without explainability, it is nearly impossible to determine where things went wrong. Drawing from her PhD work on self-driving cars, she emphasized the need for AI systems to “tell a story” at each stage, much like how fault is assigned in car accidents through testimony and third-party evaluation.

To ensure accountability, she advocated that every component of an AI system be explainable, enabling a regulatory body to assess where the failure occurred—whether in the data, training, or deployment. Without that transparency, assigning blame is speculative, making robust explainability a prerequisite for meaningful regulation.
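One way to picture this requirement is a pipeline in which every component appends a structured explanation record that an auditor can replay afterwards. The sketch below is a hypothetical illustration of that pattern, with invented stage names and fields; it is not drawn from Dr. Gilpin's self-driving work.

```python
# Hypothetical sketch of per-component accountability: each pipeline stage
# records what it concluded and why, so an auditor can locate a failure later.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ExplanationLog:
    records: list = field(default_factory=list)

    def tell(self, stage: str, claim: str, evidence: Any) -> None:
        """Record one stage's contribution to the system's 'story'."""
        self.records.append({"stage": stage, "claim": claim, "evidence": evidence})

    def audit(self) -> None:
        """Replay the story so a reviewer can see where things went wrong."""
        for r in self.records:
            print(f"[{r['stage']}] {r['claim']}  evidence: {r['evidence']}")

# Invented stages of a driving pipeline, each narrating its own decision.
log = ExplanationLog()
log.tell("perception", "detected a pedestrian ahead", {"confidence": 0.91})
log.tell("planning", "decided to brake", {"rule": "yield to pedestrians"})
log.tell("control", "applied 60% brake pressure", {"latency_ms": 45})
log.audit()
```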

Bridging the Gap Between Research and Real-World Application

Dr. Gilpin explained that a major challenge in translating her research on explainability into real-world applications lies in the disconnect between what AI researchers consider an effective explanation and what users can actually understand. Many of the tools developed in academic settings provide technically accurate insights into how a model behaves, but they’re often incomprehensible to stakeholders outside the lab, whether policymakers, domain experts, or everyday users.

She gave an example involving a classification system where a model’s explanation might identify a coffee cup as being similar to a dinner plate or water bottle. While that may make sense from a feature-learning perspective, it wouldn’t be meaningful to a non-expert without additional context. This highlights a core tension in the field: the more faithfully an explanation represents the underlying model, the less accessible it tends to be. Conversely, simplifying it for usability risks losing technical fidelity.
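Explanations like this typically come from proximity in a learned feature space: the system reports the reference examples nearest to the input. The toy sketch below, with fabricated embeddings and labels, shows how such an explanation can be faithful to the model yet baffling without added context.

```python
# Toy nearest-neighbor explanation in a learned feature space.
# The 2-D "embeddings" are fabricated; real features would be high-dimensional.
import math

PROTOTYPES = {
    "dinner_plate": (0.80, 0.20),
    "water_bottle": (0.75, 0.30),
    "bicycle":      (0.10, 0.90),
}

def explain_by_similarity(predicted_label, query_embedding, k=2):
    """Report the k nearest prototypes: faithful to the model, cryptic to a lay user."""
    ranked = sorted(PROTOTYPES.items(),
                    key=lambda item: math.dist(query_embedding, item[1]))
    neighbors = ", ".join(
        f"{name} (distance {math.dist(query_embedding, emb):.2f})"
        for name, emb in ranked[:k]
    )
    return f"'{predicted_label}' was chosen because the input is closest to: {neighbors}"

# The coffee cup's embedding happens to sit near plates and bottles, so the
# explanation reflects the feature space but puzzles a non-expert.
print(explain_by_similarity("coffee_cup", (0.78, 0.25)))
```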

As a result, Dr. Gilpin emphasized that explainability tools are often a “translation layer” away from the people they’re supposed to serve, which makes deployment and adoption especially difficult. Overcoming this gap requires not just better UI or documentation, but fundamentally rethinking how systems are built to serve diverse end users, without compromising on transparency or accountability.

The Environmental Toll of AI

Dr. Gilpin emphasized that environmental sustainability in AI development is a critical yet under-addressed issue, particularly as models grow in size and demand vast computational resources. She cited concerns about the energy consumption and water usage of large-scale AI systems, referencing claims that AI may soon consume more resources than entire countries. Despite these alarming trends, many companies remain uninterested in the environmental impact of their models because current incentives prioritize scale, speed, and profit over sustainability.

To address this imbalance, Dr. Gilpin advocated for the use of lighter, more efficient alternatives, such as symbolic or neuro-symbolic systems, which can perform many tasks with significantly reduced resource demands. These models not only require less data and power but also offer added benefits in interpretability and control.

Dr. Gilpin argued that technical solutions alone are not enough and that policy must drive environmental accountability, similar to how regulations have pushed the automotive industry towards cleaner energy. She stressed the importance of regulatory intervention to align corporate behavior with long-term sustainability goals, ensuring that innovation doesn’t come at the cost of planetary health.

Advice for Students Entering AI Ethics

Dr. Gilpin offered two key pieces of advice for those starting in AI and AI ethics. First, she recommended immersing yourself in the conversation by subscribing to newsletters, reading blogs, and following updates from major AI ethics conferences. You don’t need to dive into technical papers immediately; just staying informed and aware of key issues is a strong starting point.

Second, she stressed the importance of building a solid foundation in mathematics. Whether through formal coursework or independent study, understanding logic, statistics, and mathematical reasoning is crucial for engaging meaningfully with AI systems. Together, staying current on ethical discourse and developing quantitative skills can provide both the awareness and the tools needed to contribute effectively to the field.

My conversation with Dr. Gilpin made clear that building ethical, transparent, and sustainable AI systems is not just a technical challenge, but a societal one. Her work bridges the gap between theory and practice, urging researchers, policymakers, and technologists to rethink how we define, evaluate, and regulate intelligent systems.
