I’ve been tracking AI governance at the local, national, and global levels, covering policies such as California’s SB 1047 and the EU AI Act on this blog. While those posts provide a broad overview of AI regulations and the challenges they aim to address, I also want to explore how leaders in academia, business, and government are thinking about these issues. I am launching this interview series to capture their perspectives. My first conversation is with Ms. Ann Skeet, Senior Director of Leadership Ethics at Santa Clara University’s Markkula Center for Applied Ethics.
Ms. Skeet has extensive experience in AI ethics, corporate leadership, and governance, collaborating with organizations ranging from Silicon Valley tech companies to the Vatican. Our conversation covered a wide range of topics in AI ethics; here are some of the key takeaways.
AI in Corporations
Ms. Skeet emphasized that AI governance cannot be an afterthought. Organizations must embed ethics at every stage of AI adoption rather than trying to fix issues later. One of her key recommendations is that businesses appoint an AI champion: someone who can coordinate AI-related decisions across departments and ensure that risks related to bias, transparency, and workforce impact are managed proactively. Corporate boards and C-suite executives must be actively engaged in AI governance and prioritize AI-related risks such as energy consumption, sustainability, and regulatory compliance.
AI Ethics in Education
Ms. Skeet is involved in a research group convened through the Vatican that examines AI’s impact on education. The group aims to advance the view that technology should improve lives while keeping human dignity at the center of decision-making. Its work addresses practical questions such as preparing students for a technologically transformed workplace and navigating the varied responses to AI adoption.
AI’s Impact on Jobs
One of the biggest concerns about AI is job displacement. While some reports predict mass layoffs, Ms. Skeet pointed out that, at least in the short term, AI will likely transform job roles rather than eliminate them outright. Companies have an ethical responsibility to communicate transparently about potential disruptions, offer retraining and upskilling opportunities to help employees adapt, and ensure equal access to AI tools so that opportunities are distributed fairly.
Role of Government
Regulating AI remains a complex challenge. Ms. Skeet believes governments should work toward the common good and stay engaged in how transformational technologies like AI are implemented. She pointed out the need for partnerships across the private, public, and nonprofit sectors to develop AI policies that balance innovation with consumer protection. While legislators are becoming more informed about the technology, political polarization and systemic inefficiencies continue to slow the process. Some governments are taking a risk-based approach to AI regulation, prioritizing pressing, high-risk issues such as deepfakes and biased decision-making.
Moving from Principles to Action
I think this was one of the most compelling parts of our conversation. Ms. Skeet highlighted the importance of translating ethical principles into concrete guidelines and practices, and she explained her organization’s framework, which breaks principles down into three levels:
- Guiding principles: high-level commitments, such as respect for human dignity and rights.
- Specifying principles: guiding principles broken down into more specific terms.
- Action principles: specifying principles translated into actionable policies, such as mechanisms for appealing an AI system’s decisions.
She noted that many companies state principles such as accountability and transparency without defining them or turning them into actionable policies, an approach that is both confusing and ultimately ineffective.
Advice for Exploring AI Ethics
Ms. Skeet emphasized that ethics is messy and requires an iterative, thoughtful approach. She recommends using ethical decision-making frameworks, such as the “Framework for Ethical Decision Making” from the Markkula Center for Applied Ethics. Explorers and learners should develop a personal code of ethics and view ethics through a positive lens, focusing on human flourishing. AI should be human-centric and work for people, not the other way around.
Ms. Skeet’s insights provide a comprehensive overview of the critical issues at the intersection of AI and ethics. Her emphasis on practical application, cross-sector collaboration, and a human-centric approach offers valuable guidance for navigating the ethical complexities of AI.





