Professor Bamshad Mobasher relates ways to adopt AI responsibly.
Building Trust in the Age of AI: Navigating Risk and Responsibility
Artificial intelligence is no longer a distant prospect; it is reshaping how Chicago businesses operate today. From hiring decisions to customer service, AI tools are being rapidly deployed across organizations. But as Dr. Bamshad Mobasher (Director of the DePaul AI Institute) emphasizes, the real question isn’t whether to adopt AI, but how to do so responsibly.
“AI’s present matters more than its imagined future,” Dr. Mobasher began, quoting a 2023 article from The Atlantic. While much public discourse focuses on futuristic scenarios of AI supremacy, the tangible impacts on society are happening right now, in the everyday decisions organizations make about deploying these powerful tools.
The Hidden Power Dynamics of Modern AI
There is a critical perspective often missing from AI discussions: the concentration of power in AI development. Unlike traditional software, today’s AI landscape is dominated by “foundation models”—massive pre-trained systems like GPT that require extraordinary computational resources to build.
Only a handful of companies possess the infrastructure to create these models, which means virtually every AI application being developed by Chicago startups and enterprises relies on technology controlled by these few dominant players. This concentration creates profound implications for accountability, competition, and equitable development. When the race to deploy AI fastest creates pressure to skip ethical evaluations and weaken safety guardrails, the consequences ripple through every downstream application.
Why Ethics Can’t Be an Afterthought
For business leaders, the ethical stakes are both moral and practical. AI decisions now affect hiring, promotions, credit approvals, healthcare, and justice – areas where mistakes carry human costs. But ethical failures also create organizational risks: reputational harm, legal exposure, workforce backlash, and erosion of customer trust.
There are five core ethical principles organizations should embed in their AI strategies:
- Beneficence (promoting well-being)
- Non-maleficence (avoiding foreseeable harm)
- Autonomy (respecting human decision-making through “human-in-the-loop” approaches)
- Justice (ensuring fairness and equity)
- Explicability (providing transparency about how AI reaches its conclusions)
That last principle – explicability – poses particular challenges with today’s large language models, which are essentially “black boxes” that predict word sequences based on statistical patterns rather than genuine reasoning. When Google’s AI search once suggested adding “nontoxic glue” to pizza to help cheese stick better (a now-infamous hallucination), it revealed how these systems can generate plausible-sounding nonsense without understanding meaning.
Bias: The Invisible Risk in Your AI Tools
Perhaps the most insidious challenge is bias. Using striking examples from image generation models, Dr. Mobasher demonstrated how AI systems absorb and amplify stereotypes from their training data. When prompted to create images of a “typical American teenager” and then make the image “more American,” multiple AI models progressively generated whiter subjects surrounded by stereotypical symbols like flags and barbecues.
These biases aren’t limited to obvious visual examples. They can creep into resume screening, loan applications, and performance evaluations in ways that are extraordinarily difficult to detect. Since foundation models trained on biased data propagate those biases to all downstream applications, organizations using these tools inherit risks they didn’t create but remain accountable for.
Practical Strategies for Chicago Businesses
Responsibility operates at three levels:
- Policy
- Organizational
- Individual
For business leaders, there are concrete strategies for responsible AI integration.
At the governance level, organizations need clear policies defining ownership and accountability for AI decisions, with ethics embedded into operational workflows rather than treated as an afterthought. Companies should test AI tools for bias and fairness before deployment and continue auditing them throughout their use, particularly when decisions affect vulnerable populations or sensitive domains like lending or hiring.
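What does a bias-and-fairness audit actually measure? One common starting point is comparing outcome rates across demographic groups. The sketch below, in Python, applies the conventional “four-fifths rule” used in U.S. employment-selection analysis; the data and group labels are entirely hypothetical, and a real audit would draw on actual decision logs and many more metrics.

```python
# A minimal sketch of one common fairness check: the "four-fifths rule"
# (disparate impact ratio). All data below is hypothetical.

def selection_rate(decisions):
    """Fraction of applicants in a group who received a positive outcome."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's.
    Values below 0.8 are a conventional red flag for adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical resume-screening outcomes: 1 = advanced to interview, 0 = rejected
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43, below the 0.8 threshold
```

A check this simple can run continuously against production decision logs, which is exactly the kind of ongoing auditing described above, though a thorough program would also examine error rates, calibration, and intersectional subgroups.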
Transparency matters throughout the process. Organizations should document the data sources, assumptions, and logic behind their AI systems, maintaining explainability records for stakeholders. When evaluating third-party AI vendors (whether Microsoft Copilot, ChatGPT, or specialized industry tools) businesses must assess their compliance standards and data protection practices. Enterprise-level protections, including encryption and commitments not to use client data for model training, should be non-negotiable.
There is also the emerging problem of “shadow AI,” in which employees use free AI tools without official approval, potentially exposing sensitive organizational data. A staff member using free ChatGPT to draft emails containing confidential information, or a department uploading proprietary documents to an AI transcription service without safeguards, can create serious vulnerabilities. Organizations need both clear policies and enterprise-grade tools that provide appropriate protections.
Empowering Your Workforce
Perhaps most importantly, ethical AI use cannot be achieved through policies alone. Employees are at the frontline of AI deployment, and they need education not just on how to use these tools, but on responsible use and ethical standards. This includes training on recognizing bias, questioning AI outputs, understanding privacy implications, and maintaining critical thinking when evaluating AI-generated content.
Organizations should position employees as defenders of ethical AI use, providing incentives for ethical innovation and integrating AI literacy into professional development. When employees understand both the capabilities and limitations of AI (including the fact that large language models are statistical prediction engines rather than reasoning systems) they make better decisions about when and how to rely on these tools.
A Call for Collective Responsibility
Dr. Mobasher concluded with a philosophical but practical message: which direction AI development takes depends largely on whether we collectively prioritize ethical standards. This means engaging at the civic level to advocate for thoughtful policy, at the organizational level to implement robust governance and testing, and at the individual level to use AI tools responsibly.
For the Chicago business community, the message is clear. AI is here to stay, and it offers genuine potential to improve organizational efficiency and human capabilities. But realizing that potential while avoiding harm requires intentionality. It requires businesses to move beyond asking “Can we deploy this?” to asking “Should we deploy this, and how can we do so responsibly?”
As AI continues its rapid evolution, organizations that build trust through transparent, ethical AI practices won’t just mitigate risk – they’ll gain competitive advantage through stronger stakeholder relationships and sustainable innovation. The future of AI in business isn’t just about the technology. It’s about the values we choose to embed in how we use it.