Recent advances in AI technologies and foundation models, such as OpenAI’s GPT-X and ChatGPT, have made new inroads into reasoning about complex cases and generating answers, poems, images, and essays. These applications of generative AI have achieved a mainstream popularity that has surprised even their developers. The excitement has prompted a wave of predictions about how these technologies might take on the work of the professions. For example, GPT-X can effectively summarize bodies of knowledge, pass admission exams, craft persuasive political messages, and write plausible academic peer reviews. The question is: will there be demand for the professions in the future?
To answer this question, researchers have updated earlier forecasts of AI’s effects on occupations to account for the new capabilities of generative AI (large language models). They predict that professionals such as teachers, sociologists, political scientists, judges, and arbitrators are at the highest risk of substitution or augmentation by AI technologies. But is this an appropriate or accurate way to think about the work of the professions and the changes that AI technologies might bring?
We argue it is not. In our article “Relational Expertise: What Machines Can’t Know”, published in the Journal of Management Studies, we examine the assumptions underlying forecasts that AI technologies threaten to take over the work of professions. These forecasts conceptualize expertise, that is, skills, knowledge, and abilities, as self-contained substances, mental achievements, and abstract capacities that are indifferent to the context and circumstances of their application. Empirical studies of the professions show, on the contrary, that expertise is more accurately described as relationally generated, applied, and recognized. By synthesizing these studies, we articulate a relational perspective on expertise that highlights how expertise is produced, used, and appreciated in interactions, ties, and links.
First, expertise is generated within and about a particular set of actors and technologies, what we refer to as assemblages. For example, lawyers rely not only on their legal expertise but also on their ability to navigate the courts. Second, expertise is applied in interaction with clients, patients, and audiences. For example, weather scientists anticipate the needs of their clients when crafting representations of their knowledge, and puppeteers adjust their expertise across different audiences in stage and screen performances. Third, expertise requires recognition: it must be made visible and authoritative to be effective. Without recognition of their expertise, we would not trust or rely on experts. For example, we trust our primary care physicians because they are licensed and thus carry institutionalized markers of their expertise.
The relational constitution of expertise creates problems of opacity, translation, and accountability for those wishing to deploy AI technologies in professional work. Relational expertise is opaque to the developers of AI technologies and does not show up in training datasets. Any recommendations or advice produced by AI technologies must be translated by those with relational expertise to become meaningful and useful. Further, the professions are still held accountable, normatively and socially, for their advice. Relational expertise thus presents a significant challenge to AI technologies.
The relational constitution of expertise (skills, knowledge, and abilities) should change how we think about AI technologies in relation to the professions and their work. Above all, this perspective questions the notion that professions could be augmented by, subordinated to, or dismantled by these technologies. We identify three challenges for efforts to develop and deploy AI technologies in professional work. First, relational expertise resides in the ties and links among actors, making it difficult to observe, capture, and replicate. Second, when these technologies are implemented in organizations, their outputs need to be adjusted to the needs, interests, and sensibilities of customers, clients, and users. Third, society still trusts the members of professions and holds them, not technologies, responsible.
The professions will continue to evolve. AI technologies are likely to become integrated into the webs of interaction through which expertise is generated, applied, and recognized. However, members of a profession have an active role to play in the development and deployment of AI technologies in their domain. Because they understand the role and value of relational expertise, professions should take the lead in domesticating these technologies in their work. Doing so is likely to require members of professions to develop new expertise and take on new roles.
***
Photo credit:
“The Future of Expertise in the Age of AI” by DALL·E (OpenAI). The noticeably gendered interpretation of AI and expertise reflects the model and training data of generative AI, not the authors’ views.