What did you study, and why is that important?
Do you want your company to adopt and fully leverage AI tools and stay ahead of competitors? The answer is almost always yes. But the real question is: how can you, as a leader, ensure that your managers and employees actually use AI effectively and your company benefits from it? Indeed, many companies are making massive investments in AI to augment and automate decision-making and other vital organizational processes. Yet, despite these efforts, AI adoption failure rates remain staggeringly high—up to 80% (Bojinov, 2023). So why do so many companies struggle to unlock AI’s actual value?
Our study, recently published in the Journal of Management Studies, reveals that AI adoption is not just a technical challenge—it’s a trust and leadership challenge. Simply convincing managers and employees that AI performs well (cognitive trust) is not enough. Successful adoption requires fostering both cognitive trust and emotional trust (a positive and sometimes irrational feeling toward AI) together. Ignoring either type of trust leads to resistance, misuse, or disengagement, ultimately undermining AI’s potential.
Most importantly, we argue that a one-size-fits-all approach to AI adoption doesn’t work. Leaders must take a personalized approach—tailoring training, communication, and leadership strategies to different trust configurations among employees. Aligning emotional and cognitive trust is the key to transforming AI from a costly experiment into a competitive advantage.
How we studied it
We conducted a real-world longitudinal study at a leading Scandinavian software development firm, a market leader in digital and AI-driven solutions. At the heart of our study was AI-Tool, an advanced technology powered by an embedded algorithm that collects and analyzes internal company data. By tracking employees’ digital footprints, such as calendar entries, search keywords, and internal communications, AI-Tool generates a visual expertise map that offers insights into the company’s collective knowledge. But implementing AI isn’t just about technology; it’s about how people interact with it. Our study went beyond tracking AI adoption to examine in depth how employees responded to AI-Tool. To do this, we gathered data from two primary sources:
- Interviews with organizational members, capturing their experiences and perceptions.
- Comprehensive company material, including:
  - Interviews conducted by the AI-Tool development team with employees.
  - AI usage statistics, revealing engagement patterns and adoption trends.
Our study provides critical insights into the challenges and opportunities of AI adoption—helping companies understand how to effectively integrate AI into their workforce and maximize its potential.
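Our paper focuses on people rather than algorithms, but the core mechanism behind AI-Tool, distilling digital footprints into an expertise map, can be illustrated with a minimal sketch. Everything below (the record fields, the sample data, and the frequency-based scoring) is an illustrative assumption, not AI-Tool’s actual implementation:

```python
from collections import Counter

# Hypothetical digital-footprint records. In AI-Tool's case these would come
# from calendar entries, search keywords, and internal communications; the
# field names and sample data here are invented for illustration.
footprints = [
    {"employee": "alice", "source": "calendar", "keywords": ["kubernetes", "devops"]},
    {"employee": "alice", "source": "search", "keywords": ["kubernetes", "helm"]},
    {"employee": "bob", "source": "chat", "keywords": ["pricing", "forecasting"]},
    {"employee": "bob", "source": "search", "keywords": ["forecasting"]},
]

def build_expertise_map(records, top_n=3):
    """Aggregate keyword frequencies per employee into an expertise profile."""
    counts = {}
    for record in records:
        counts.setdefault(record["employee"], Counter()).update(record["keywords"])
    # The "map" is simply each person's most frequent topics; a production
    # system would add source weighting, recency decay, and visualization.
    return {
        person: [topic for topic, _ in c.most_common(top_n)]
        for person, c in counts.items()
    }

print(build_expertise_map(footprints))
# {'alice': ['kubernetes', 'devops', 'helm'], 'bob': ['forecasting', 'pricing']}
```

Even this toy version makes the study’s central tension visible: the map is only as good as the footprints employees choose to share.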
Key findings
- New AI technology can evoke four distinct trust configurations among organizational members (a minimal sketch of this 2x2 mapping follows this list):
  - full trust (high cognitive/high emotional),
  - full distrust (low cognitive/low emotional),
  - uncomfortable trust (high cognitive/low emotional),
  - blind trust (low cognitive/high emotional).
- Organizational members exhibit distinct behaviours under each of the four trust configurations:
  - detailing their digital footprints (clarifying, specifying, and elaborating on information in the digital world),
  - manipulating their digital footprints (‘feeding’ the AI information that is not entirely true),
  - confining their digital footprints (consciously and selectively disclosing only limited pieces of personal information and habits),
  - withdrawing their digital footprints (blocking the collection of data from one’s own digital footprint).
- These behaviours trigger a “vicious cycle”: manipulation biases the data, while detailing, confining, and withdrawing leave it unbalanced and asymmetric; these degraded inputs worsen AI performance, which further erodes trust and stalls adoption.
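To make the 2x2 framework concrete, here is a minimal sketch of how the two trust dimensions combine into the four configurations. The numeric survey scores and the 0.5 cut-off are hypothetical; the study identified the configurations qualitatively, not with numeric thresholds:

```python
def trust_configuration(cognitive_high: bool, emotional_high: bool) -> str:
    """Map the two trust dimensions onto the study's four configurations."""
    if cognitive_high and emotional_high:
        return "full trust"
    if not cognitive_high and not emotional_high:
        return "full distrust"
    if cognitive_high:
        return "uncomfortable trust"  # high cognitive / low emotional
    return "blind trust"  # low cognitive / high emotional

# Hypothetical survey scores on a 0-1 scale; the 0.5 threshold is invented.
cognitive_score, emotional_score = 0.8, 0.3
print(trust_configuration(cognitive_score > 0.5, emotional_score > 0.5))
# -> uncomfortable trust
```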
Key takeaways and recommendations
- Leaders must develop a tailored, people-centered strategy for AI adoption by:
  - Tailoring communication to each trust configuration.
    Leaders should adapt messaging based on employees’ trust configurations. For those with full trust, reinforce responsible AI use. For full distrust, provide transparent explanations and real-world benefits. Employees with uncomfortable trust need reassurance through practical demonstrations, while those with blind trust require awareness of AI’s limitations to prevent overreliance.
  - Designing personalized training and support based on each trust configuration.
    A one-size-fits-all training approach won’t work. Leaders should develop personalized training programs that address both cognitive (rational understanding) and emotional (confidence and security) trust.
- Two actions that leaders can take to increase the cognitive dimension of trust
- Provide comprehensive AI training programs
  - Organize regular workshops and seminars to educate employees about:
    - How the AI tool works (data input, algorithms, and decision-making).
    - The capabilities and limitations of the AI tool.
    - Real-world use cases and examples of AI-driven decision-making.
  - Include hands-on sessions where employees interact with the AI tool and observe its functioning.
  - Distribute clear, accessible materials (infographics, videos, FAQs) summarizing key AI concepts.
- Manage expectations regarding AI performance
  - Set realistic timelines for AI performance improvements, acknowledging that AI outcomes improve as it learns from data over time.
  - Regularly communicate progress updates to employees, showcasing how the AI tool is enhancing workflows and decision-making.
  - Celebrate AI-driven milestones and improvements, highlighting tangible benefits and success stories to boost morale and trust.
- Three actions that leaders can take to increase the emotional dimension of trust
- Demonstrate positive interaction with AI tools
  - Organize “AI Discovery Sessions” where leaders share their positive experiences and pride in AI-driven accomplishments.
  - Use social media, internal newsletters, and team meetings to communicate leaders’ enthusiasm and confidence in AI.
- Cultivate a psychologically safe environment
  - Set up dedicated forums and town halls where employees can share concerns, ask questions, and express their feelings about AI.
  - Implement anonymous feedback channels where employees can share thoughts about AI without fear of repercussions.
  - Train managers in emotional intelligence to address anxieties constructively.
- Commit to ethical use and data privacy
  - Ensure adherence to strict data privacy and ethical guidelines by setting clear company-wide standards for data collection and consent.
  - Provide regular workshops on data ethics and responsible AI use, emphasizing consent, transparency, and fair practices.
  - Publicly communicate the company’s commitment to honouring data privacy, transparency, and ethical AI use.
Who should read this article and why
This study is essential for any leader looking to successfully navigate AI adoption, avoid common pitfalls, and build a future-ready workforce. Are you ready to unlock AI’s full potential in your organization? This study provides the critical insights and strategies you need to succeed.