Beyond the Algorithm: Building an Inclusive Future Where Everyone Belongs at the AI Table
When Professor William “Marty” Martin asks his audiences to imagine “Lydia”—a successful middle manager and Excel wizard with ten years of experience who applied to 70 jobs and received just one interview—an uncomfortable recognition settles over the room. In our rapidly evolving business landscape, Lydia’s story isn’t fiction. It’s a cautionary tale that highlights one of the most pressing challenges facing Chicago’s business community today: ensuring that artificial intelligence enhances rather than undermines workplace belonging and human potential.
In his April 2025 presentation, Martin delivered insights that should give every business leader pause. His talk, “AI in an Inclusive World | Ensuring We are All at the Table,” revealed both the tremendous opportunities and the significant risks that AI presents for creating truly inclusive workplaces.
The Lydia Effect: When Expertise Becomes Obsolete Overnight
Martin’s case study of Lydia illuminates a stark reality. Despite her impressive credentials and quantifiable achievements, Lydia represents millions of professionals whose core competencies—spreadsheet mastery, routine task execution, and analytical skills with large datasets—can now be performed more efficiently by AI systems. The audience’s immediate concern for Lydia wasn’t misplaced. As Martin noted, drawing from Microsoft and LinkedIn’s 2024 Work Trend Index, 66% of leaders say they would not hire someone without AI skills.
But Lydia’s challenge extends beyond skill gaps. When she applies for positions online, she encounters algorithmic screening systems that may never give her a fair chance. Martin shared sobering research showing how AI hiring algorithms demonstrate stark racial and gender biases: resumes with white-associated names were selected 85% of the time versus just 9% for Black-associated names. Perhaps most troubling, Black men’s resumes were never selected when other candidates were available.
“The algorithm doesn’t design itself,” Martin emphasized. “It is designed.” This fundamental truth underscores why inclusive design must be intentional, not accidental.
Scale Amplifies Everything—Including Bias
One of Martin’s most compelling insights concerned the amplified impact of algorithmic bias compared to individual human bias. While a single hiring manager might affect 40 candidates annually, an algorithm screening 80,000 applicants can perpetuate discrimination at an unprecedented scale. When we automate for efficiency and reach, we also automate whatever biases exist in our systems and data.
This reality became painfully clear in Amazon’s well-publicized hiring algorithm debacle, where the system favored applicants based on male-dominated activities like rugby and lacrosse, simply because the training data reflected the company’s historically male-heavy workforce. The lesson is clear: biased training data inevitably produces biased outcomes, and those outcomes affect real people’s lives and livelihoods.
The Human Edge: What AI Cannot Replace
Despite AI’s remarkable capabilities, Martin outlined crucial areas where human expertise remains irreplaceable. While AI excels at pattern recognition, mathematical computation, and objective decision-making, it struggles with novel situations, unique insights, intuitive thinking, and genuine social interaction. These limitations point toward what Martin calls the “human edge”—our capacity for emotional intelligence, creativity, strategic thinking, and authentic connection.
The key insight for professionals like Lydia isn’t to compete with AI on computational tasks, but to become the “human in the loop” or “expert in the loop.” This means developing both AI literacy and the distinctly human skills that complement artificial intelligence. As Martin explained, it’s not either-or thinking, but both-and thinking that creates career resilience.
The PEACE Framework: Deploying AI with Humanity
Perhaps Martin’s most valuable contribution was his PEACE framework for implementing AI in psychologically safe ways:
- Psychological Safety ensures people can express concerns and alternative viewpoints without fear of retaliation.
- Empathy acknowledges that anxiety about AI changes is rational and deserves compassionate response.
- Acceptance recognizes everyone’s capability and motivation to grow and adapt.
- Connection maintains human relationships that provide crucial social support.
- Embracing Agency respects individuals’ ability to make decisions about their own career paths.
This framework addresses a growing concern Martin raised about digital isolation. As we increasingly rely on AI as digital coworkers, we risk disconnecting from the human relationships that provide meaning, support, and growth opportunities. The World Health Organization has identified loneliness as a global epidemic, and workplace AI deployment could exacerbate this challenge without thoughtful implementation.
Inclusive Design: Prevention Over Remediation
Drawing inspiration from Microsoft’s inclusive design principles, Martin advocated for proactive approaches to AI equity. The three core principles—recognize exclusion, learn from diversity, and solve for one to extend to many—offer a roadmap for developing AI systems that work for everyone.
This preventive approach matters because the alternative is costly legal remediation. Martin referenced Mobley v. Workday, Inc., in which a job applicant alleged that algorithmic screening tools discriminated against candidates like him. As Martin noted, organizations cannot simply blame the algorithm when bias occurs—they bear responsibility for the tools they choose and how they implement them.
A Personal and Professional Call to Action
Martin’s unique perspective as both a business professor and licensed clinical health psychologist brings depth to his analysis. His research background in business ethics, workplace wellness, and burnout—combined with his hands-on experience with digital mental health startups—positions him to see both the technical and human dimensions of our AI transformation.
For Chicago’s business leaders, Martin’s message is both urgent and hopeful. The urgent part: organizations must act intentionally now to ensure AI deployment enhances rather than undermines belonging and inclusion. The hopeful part: with thoughtful planning and implementation, AI can be a tool for expanding opportunity rather than restricting it.
Building Tomorrow’s Inclusive Workplaces Today
As Martin concluded his presentation, he emphasized that inclusive AI isn’t just about legal compliance or ethical responsibility—though both matter enormously. It’s about building organizations where every person can bring their authentic self to work and receive validation for their unique contributions. It’s about recognizing that engagement requires the whole person, not just convenient fragments.
The choice facing Chicago’s business community is clear. We can allow AI to amplify existing inequities and create new forms of exclusion, or we can intentionally design systems and processes that expand access and opportunity. We can treat AI as just another efficiency tool, or we can leverage it as a catalyst for creating more inclusive, human-centered workplaces.
Professor Martin’s insights remind us that the future of work isn’t predetermined. It’s being designed right now, in conference rooms and boardrooms across our city. The question isn’t whether AI will transform how we work—it’s whether that transformation will reflect our highest values and aspirations.
The table is being set for tomorrow’s workforce. The question is: who will have a seat, and who will be left standing outside, looking in?