DePaul AI Institute Director and IIT Ethics Scholar Challenge Us to Reimagine AI Development
What happens when cutting-edge technology meets timeless ethical questions? At the International Vincentian Business Ethics Conference, two leading scholars explored the urgent challenges and promising pathways forward for artificial intelligence.
On October 25th, 2025, Bamshad Mobasher, Director of the DePaul AI Institute and Professor of Computer Science, joined Elisabeth Hildt, Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions at IIT, for a thought-provoking keynote panel. Their discussion, “Ethics and AI: Power, Access, and Trust,” revealed both the serious risks embedded in today’s AI systems and the concrete frameworks we can use to build something better.
The Power Problem: Who Controls AI?
Mobasher opened with a sobering reality check. While we often hear about AI’s utopian promises or catastrophic risks, we’re overlooking something more immediate: the concentration of power in the hands of a few tech giants.
The key shift happened around 2017–2018 with the emergence of “foundation models”—the massive pre-trained systems underlying tools like ChatGPT. Unlike traditional AI models trained on specific datasets for specific tasks, these foundation models are trained on essentially all available data on the internet. Only a handful of companies—OpenAI, Google, Microsoft, and a few others—possess the computational resources and capital to build them.
This creates what Mobasher calls a “tech oligarchy.” Universities can’t compete. Smaller companies remain dependent. And everyone using AI applications downstream relies on models controlled by these dominant players.
The implications extend far beyond technology. Mobasher highlighted troubling trends: companies racing to deploy models before robust safety testing, internal ethics teams being quietly dismantled (Google dropped its policy against developing AI that could “cause overall harm” in early 2025), and AI being weaponized for surveillance and control rather than human empowerment.
From exploitative labor practices to discriminatory pricing algorithms, from workplace surveillance systems to the consolidation of government data for potential misuse—the drive for profit and market dominance is reshaping society in ways that demand our attention.
From Trust to Trustworthiness
Elisabeth Hildt reframed the conversation around a critical distinction: we shouldn’t ask whether we trust AI, but whether AI systems are trustworthy.
Trust, as she explained, is inherently subjective—it says as much about us as it does about the technology. We can over-trust systems that don’t deserve it. Studies show people following malfunctioning robots in emergency evacuations, or attributing human-like reliability to chatbots that routinely “hallucinate” false information.
Instead, Hildt pointed to the European Union’s Ethics Guidelines for Trustworthy AI, which establish concrete criteria. Trustworthy AI must be:
- Lawful (complying with regulations)
- Ethical (respecting fundamental principles)
- Robust (sound from both a technical and social perspective)
The framework identifies seven key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability.
But how do we move from abstract principles to practical action? Hildt described the case study-based approach her international team has developed—analyzing specific AI tools in their social contexts, identifying stakeholders, mapping ethical issues to these requirements, and working iteratively to strengthen systems throughout their entire life cycle.
The Explainability Challenge
A recurring theme emerged during audience discussion: the “black box” problem. When a bank’s AI denies your loan application, the banker often can’t explain why. When hiring algorithms screen out candidates, the criteria remain opaque. Deep learning models with countless layers make it virtually impossible to trace how decisions emerge.
Mobasher acknowledged this as a serious issue predating even large language models, though they’ve made it worse. There’s active research in “explainable AI,” including newer “reasoning models” that attempt to show their step-by-step process. But we have far to go, especially in high-stakes domains like healthcare, finance, and criminal justice—where explainability isn’t a nice-to-have feature but a fundamental requirement for justice.
Beyond Doom and Gloom: Pathways Forward
Both speakers emphasized that despite these challenges, we’re not powerless. Mobasher outlined key strategies:
- Embed ethics from the start. Don’t treat ethical considerations as an afterthought or a single lecture at the end. Integrate responsible development practices throughout the entire design and deployment process.
- Involve affected communities. Stakeholders who will be impacted by AI systems should participate in their development, not just be subjects of experimentation.
- Promote transparency and open source. Encourage the use of open-source models over proprietary black boxes, making systems auditable and accountable.
- Build regulatory frameworks. From risk assessments to algorithmic audits, we need policy structures that protect public interest. The European Union’s AI Act, with its tiered approach to risk levels, offers one model.
Hildt’s work demonstrates that considering ethical dimensions isn’t just morally right—it can improve outcomes. She shared how one development team initially viewed ethical assessment as an obstacle, but incorporating those considerations ultimately helped them win competitions for the quality of their tool.
The Role of Education
For those of us in universities, the message is clear: we have urgent work to do. As Mobasher noted, while AI may eliminate some jobs, one growing role will be the AI ethics specialist—someone who can navigate technical systems, understand ethical frameworks, assess risks, and train others in responsible development.
We must prepare students not just to build AI, but to build it well. This means interdisciplinary education bridging computer science, philosophy, the social sciences, and the domains AI affects. It means hands-on practice with ethical frameworks, not just theoretical study. And it means fostering the critical thinking skills to question not just how to implement AI, but whether and for whom.
A Call to Collective Action
The panelists left us with both sobering realities and realistic hope. Yes, we face enormous challenges: concentrated power, eroding safety standards, risks to marginalized communities, and regulatory gaps. But we also have concrete frameworks, growing research communities, and increasing public awareness.
As Mobasher concluded, “This requires collective action on the part of everyone who is using AI tools, including governments, legislative bodies, and others.” From developers to policymakers, from educators to users, we all have roles to play in shaping AI that genuinely serves human dignity, rights, and flourishing.
The question isn’t whether AI is here to stay—it is. The question is what kind of AI future we’ll build together.