RETHINKING ARTIFICIAL INTELLIGENCE: FROM AN ENTITY TO AN ORGANIZING CAPABILITY  

by Marta Stelmaszak, Mayur Joshi, and Ioanna Constantiou | Nov 13, 2025 | Management Insights


The current predominant narrative depicts artificial intelligence (AI) as an entity that resides in algorithms. In business practice as well as in academic discourse, we often talk about deploying, installing, or integrating AI into our existing systems, treating AI as an entity. In our new article, recently published in the Journal of Management Studies (open access), we challenge this dominant view and argue that AI is an organizing capability that arises only when humans and algorithms work together in practice. This new definition changes how we think about AI in organizations, policymaking, and society: AI is not a tool we adopt, but a capability we collectively create. In this blog post, we outline the implications of the capability view for managing AI.

Why not continue with an entity view of AI? 

Most conversations about AI assume what we call an entity view: the idea that AI is a self-contained system that “lives” inside algorithms and can think and act on its own. From this perspective, adopting AI looks like buying a product, installing a tool, or onboarding a new kind of coworker on a project—something that can be introduced into the organization and expected to perform well and in line with organizational goals.  

But this very assumption helps explain why so many AI projects fail. When managers treat AI as a product to be adopted, they often underestimate the organizational work needed to make it effective. They assume success depends mainly on bringing state-of-the-art intelligent technologies to the organization, rather than focusing on how to produce, integrate, and sustain intelligence over time. This resonates with the recent MIT report (Furr & Shipilov, 2025), which highlights that 95% of enterprise generative AI pilots fail to deliver measurable returns: the entity view leads managers to expect plug-and-play results, leaving them unprepared for the changes in roles, processes, and responsibilities that actually determine whether AI delivers value. 

A capability view of AI 

Our research suggests that AI may be better understood not as an entity you “plug in”, but as an organizing capability: the ability of humans and algorithms, working together, to analyze information, learn over time, and act in ways that shape how work is done. This capability does not reside in the technology alone, but in the system of relations through which people and algorithms connect, depend on each other, and evolve.  

Three properties capture this capability: 

Connectivity. AI only exists when humans and algorithms connect. A matching algorithm at Uber, for instance, does not “decide” anything until drivers and riders actually use the app, request rides, and accept trips.

Codependence. Both humans and algorithms need each other for AI to arise. Algorithms may process data at scale, but they cannot set goals or interpret context without human input. Equally, humans cannot deliver the same speed, scale, or pattern recognition without algorithms.

Emergence. AI changes as these relations and entities evolve. Over time, people learn how to use algorithms differently, algorithms adapt through feedback, and the capability itself becomes different. In medical diagnostics, for example, radiologists and algorithms have learned from each other: radiologists refine how they interrogate outputs, while algorithms improve through the labeled data radiologists provide.

Implications for practice 

AI requires connectivity, not adoption. AI does not reside in technology alone; it emerges only when humans and algorithms work together. Viewing AI as a “product” obscures the enacted relations and ongoing coordination it requires. AI is not a product that can be purchased and “rolled out”; it is a capability that develops through cocreation. Novo Nordisk’s large-scale Copilot deployment is a good example: success required not just installing the software, but creating champion networks, investing in training tailored to different functions, and supporting employees through the inevitable dip when early excitement gave way to frustration (Wade et al., 2025). Accordingly, domain experts must engage as co-producers of capability, not as passive recipients of model insights. Their interpretive inputs, contextual knowledge, and iterative feedback are what make AI work. They need to decide when and how to involve algorithmic actors in solving problems.

Managing AI means managing relations. AI is not a decision-maker on its own but a capability that depends on relations between human and algorithmic actors. This means the role of managers is not to “delegate” to AI, but to design and manage those relations. So, it is not a question of deploying AI, but of designing relations that will contribute to intelligence (see also Schrage & Kiron, 2025). Codependence implies that humans and algorithms need each other for AI to arise, and managers are the ones responsible for structuring opportunities for such encounters. Domain experts occupy the frontline of these relations. They need clarity on how their professional judgment interacts with algorithmic recommendations, how disagreements are resolved, and how accountability is shared. Recognizing that responsibility lies in the system of relations, not in individual tools, allows domain experts to negotiate oversight and control more effectively. 

AI can reshape organizational goals. AI not only helps organizations achieve existing goals, but can also change how those goals are defined. Because AI emerges through ongoing interactions, it can surface new patterns, nudge priorities, or narrow attention in ways that alter organizational goals. Emergence implies that AI changes as relations evolve, and so too does the organization’s intelligence. Therefore, domain experts need to revisit the fundamental basis of their expertise. Instead of being experts in solving given problems, domain experts may benefit from developing expertise in discovering and substantiating novel problems that could be solved in collaboration with algorithmic actors. 

Building intelligence together 

Practitioners must see themselves not only as technology adopters but as architects of organizational intelligence, responsible for shaping how human and algorithmic reasoning interact across their organizations. That means supporting co-creation, managing relations, and staying alert to how AI reshapes organizational goals. For managers, policymakers, and educators alike, the question is no longer “How do we adopt AI?” but “How do we design the systems of relations that make AI work?”

 

Authors

  • Marta Stelmaszak

    Marta Stelmaszak is an assistant professor of information systems at the Isenberg School of Management at the University of Massachusetts Amherst, USA. She holds a Ph.D. from the London School of Economics and Political Science and an M.Sc. in data science from Birkbeck, University of London. Marta’s research interests concern digital data and their responsible, sustainable, and ethical management in organizations, including in artificial intelligence.

  • Mayur Joshi

    Mayur Joshi is an assistant professor of information systems at the Telfer School of Management, University of Ottawa, and holds a PhD from Ivey Business School, Western University, Canada. His research interests are at the intersection of information systems and organization theory. He explores an overarching research question on how digital technologies shape and are shaped by the fundamental practices, processes, and strategies of organizing. His recent work examines the occupation of data science and the role of artificial intelligence in transforming work and organizational decision-making.

  • Ioanna Constantiou

    Ioanna Constantiou is a full professor of information systems in the Department of Digitalization at the Copenhagen Business School. From 2017 to 2020, she was employed as a professor of information systems in the Department of Applied IT at the University of Gothenburg in Sweden (part-time from July 2017). She received her Ph.D. from the Department of Management Science and Technology at Athens University of Economics and Business in 2003. Her current research focuses on the digital transformation of organizations and the impact of AI on strategy and leadership, with a particular focus on decision-making and human-AI collaboration. She serves as deputy editor-in-chief for the Journal of Strategic Information Systems.
