Key Takeaways
- 83 percent of business leaders believe psychological safety directly impacts the success of enterprise AI initiatives
- A culture of psychological safety is essential for scaling AI initiatives across global organizations
- Fear of failure, unclear communication, and limited leadership openness are significant barriers to AI adoption
- Creating psychological safety requires explicit messaging about AI’s realistic capabilities, limits, and approved use cases
- Prioritizing psychological safety can create trust, resilience, and openness, unlocking the full potential of AI
Introduction to Psychological Safety in AI Adoption
A new report from Infosys and MIT Technology Review Insights highlights the critical role of psychological safety in driving AI initiatives. The report finds that 83 percent of business leaders believe psychological safety directly impacts the success of enterprise AI initiatives, suggesting that a culture of psychological safety is essential for scaling AI across global organizations. Psychological safety refers to employees feeling able to take risks, experiment, and share ideas without fear of backlash or negative consequences.
The Impact of Psychological Safety on AI Success
The report shows that a culture of psychological safety has a significant impact on the success of AI projects. More than four out of five respondents (83 percent) say psychological safety has a measurable effect on the success of AI initiatives, and 84 percent report direct links between psychological safety and tangible business outcomes. By creating a culture of trust and openness, organizations can empower employees to experiment, innovate, and drive business results. Achieving psychological safety remains a moving target, however: fewer than two in five respondents (39 percent) describe their current level of psychological safety as "high."
Barriers to AI Adoption
Despite rapid advances in AI technology, the report finds that human factors are holding enterprises back. Fear of failure, unclear communication, and limited leadership openness are significant barriers to AI adoption. More than one in five respondents (22 percent) admit they have hesitated to lead or suggest an AI project because of fear of failure or potential criticism. This fear can undermine innovation and prevent employees from fully engaging with AI initiatives. The report also notes that organizations may have the tools and strategies in place, but without psychological safety, adoption falters.
Creating Psychological Safety
Creating psychological safety takes more than good intentions or HR policies. It requires explicit messaging about AI's realistic capabilities, limits, and approved use cases, along with clear communication and ongoing dialogue that keep transparency, ethics, and stakeholder engagement front and center. The report notes that 60 percent of respondents say clarity on how AI will and won't impact jobs would do the most to improve psychological safety, while just over half (51 percent) say that leaders modeling openness to questions, dissent, and failure is just as important. By prioritizing psychological safety, organizations can create a culture of trust, resilience, and openness, unlocking the full potential of AI.
The Role of Leadership in Psychological Safety
Leadership behaviors are critical levers in creating psychological safety. The report notes that leaders who communicate clearly about AI’s impact and model openness to questions and dissent create the conditions for innovation. Without that foundation of trust, even the most advanced AI strategies will falter. Rafee Tarafdar, Chief Technology Officer, Infosys, said, "We’ve observed that the most successful enterprise AI transformations happen in organizations that foster psychological safety. When employees feel empowered to experiment without fear of failure, innovation thrives." Sushanth Tharappan, Executive Vice President – HR, Infosys, added, "At Infosys, we’ve built a culture of innovation where employees are constantly looking for new opportunities to innovate with AI. We’ve seen firsthand how psychological safety accelerates adoption and when employees have safe spaces to experiment and reimagine roles, it streamlines the technological aspect."
Conclusion
The report underscores that AI transformation is not only a technological journey but also a cultural one. By prioritizing psychological safety, enterprises can build the trust, resilience, and openness needed to unlock the full potential of AI. As Laurel Ruma, Global Editorial Director, MIT Technology Review Insights, said, "Our research, in collaboration with Infosys, shows that psychological safety is not a soft metric, it is a measurable driver of AI outcomes." The message is clear: when employees feel comfortable taking risks, experimenting, and sharing ideas without fear of backlash or negative consequences, organizations can drive innovation and achieve meaningful business outcomes.


