Stakeholder management is difficult. Leaders walk a tightrope: They must build trust among employees, investors, partners and other impacted stakeholders who have different (sometimes competing) aims and much to lose. And, in an increasingly automated age where AI and other technologies integrate into workflows, the usual ways of winning trust must adapt and evolve.
AI systems have two main implications: accelerating change and shifting decision-making power. Both of these make stakeholder management more difficult. By design, these systems perform and augment tasks traditionally handled by humans — including in assumed strongholds of human superiority, such as strategy and the arts. But that does not mean that they should remove human decision-making authority or input.
As AI systems become more complex, the importance of stakeholders’ input in decision-making will decrease unless a thoughtful design process is implemented. AI systems can create powerful momentum for businesses, but initial input influences whether a system’s impact will be positive or negative. For instance, ChatGPT’s designers use Reinforcement Learning from Human Feedback (RLHF) to train the agent to incorporate user feedback into future behavior.
As public feedback accumulates, we will see whether the RLHF approach is effective in addressing ethical issues raised about generative AI systems.
Responsible leaders face a fundamental challenge: How do they build stakeholder inclusion and oversight into AI systems and processes? Current stakeholder engagement models exert more leverage today than they are likely to in the long term.
We propose a new way for leading organizations to design stakeholder engagement strategies that will be maximally inclusive and effective at this pivotal moment.
Introducing the ladder model of stakeholder engagement
A broad spectrum exists between engaging employees or customers as passive stakeholders or as decision-making partners. But it can be difficult to know where on that spectrum an audience is — or should be — at any given point.
A model can help us break this spectrum down into observable steps. Consider the ladder model of stakeholder engagement — first proposed by housing policy analyst Sherry Arnstein in 1969 and doubly relevant to our modern dilemma. Arnstein’s ladder was initially developed with a political lens, and we have updated it here to fit a business context.
On Arnstein’s ladder, stakeholder audiences sit with varying levels of power, from nonparticipation to shared control. At the lowest level, a misaligned decision-maker might provide minimal or inaccurate information to manipulate stakeholders or only address their emotional responses. Slightly better than this step is providing more thorough one-way flows of information or holding listening sessions with a token sample of stakeholder groups.
Left: Original stakeholder ladder model from Arnstein, 1969
Right: Ladder redesigned for executives in the age of automation
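The ladder’s rungs form an ordered scale, which can be made concrete in code. The sketch below is illustrative only: the rung names paraphrase Arnstein’s original 1969 categories, and the labels and groupings (including “shared control” at the top, echoing the article’s phrasing) are our assumptions rather than an official taxonomy.

```python
from enum import IntEnum

class EngagementRung(IntEnum):
    """Arnstein-style ladder of stakeholder engagement, lowest to highest.

    Names paraphrase Arnstein (1969); the business adaptation in the
    figure may label rungs differently.
    """
    MANIPULATION = 1      # minimal or inaccurate information
    THERAPY = 2           # addressing only emotional responses
    INFORMING = 3         # one-way flows of information
    CONSULTATION = 4      # listening sessions with token samples
    PLACATION = 5         # token representation in decision bodies
    PARTNERSHIP = 6       # negotiated, shared planning
    DELEGATED_POWER = 7   # stakeholders hold real authority on some decisions
    SHARED_CONTROL = 8    # full decision-making partnership

def is_participatory(rung: EngagementRung) -> bool:
    """Arnstein grouped the top rungs as genuine degrees of stakeholder power."""
    return rung >= EngagementRung.PARTNERSHIP

# Because IntEnum values are ordered, rungs compare naturally:
print(is_participatory(EngagementRung.CONSULTATION))  # False
print(EngagementRung.INFORMING < EngagementRung.PARTNERSHIP)  # True
```

Encoding the rungs as an ordered type makes it easy to audit, per stakeholder group, whether a planned engagement actually clears the participatory threshold or stalls at information delivery.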
As AI systems become more powerful, they also become more complex when it comes to both their technical components and their interactions with other societal structures and systems. This increased complexity can make it difficult to understand and explain an AI system’s inner workings and predict its potential effects on society.
Increasing AI capabilities can induce companies to engage stakeholders at lower levels — some intentionally and some unintentionally — so responsible stakeholder engagement will require swimming against this current.
Responsible stakeholder engagement is not only essential to inclusion; it also offers leadership accountability and reduced reputational risk. Inclusive stakeholder management attracts talent, assuages investors and boosts the trust organizations need to survive and grow.
Climbing the ladder
To design and execute a stakeholder management strategy that moves audiences up Arnstein’s ladder toward richer and more impactful engagement, leaders can assess their organizations’ current positions on the ladder and design new strategies accordingly. We recommend four steps: select, educate, evaluate and integrate.
With input from marketing, investor relations and human resources functions, leaders can closely examine their stakeholder audiences. In preparation for an upcoming decision or roll-out, they can consider what rung of the ladder to target for each group. This will allow them to build trust, reduce uncertainty and understand potential unintended consequences.
They can consider the cognitive diversity within each audience and incorporate “invisible” stakeholders who will be critical to the organization’s long-term success. This can include future talent, local communities and the environment. The goal of this exercise is not to bring in so many perspectives that decision-making becomes impossible, but rather to make stakeholder priorities explicit and avoid undesired outcomes.
Leaders need to be thoughtful about what level of stakeholder engagement may be feasible before, during, and after the launch of new systems. For example, stakeholder engagement is essential before launching a content moderation system, but may be less impactful once the system has begun evaluating content at superhuman speed.
Stakeholder education has always been the first step toward receiving valuable input. This education becomes more important — and more complex — when it comes to increasingly technology-enabled decisions. Insights from the field of behavioral design, which offers clear frameworks for moving people from “Awareness” to “Alignment” to “Action/Decision,” may help.
Explaining increasingly complicated, decision-relevant material to highly tailored audiences sounds daunting, and if done manually, may indeed prove prohibitively labor-intensive. Within the AI research community — a notoriously fast-moving field where it’s increasingly difficult to keep up — researchers decided to try applying machine learning (ML) to the stakeholder education problem.
One promising technique is creating AI-generated newsletters summarizing the field’s latest updates. Organizations of all stripes can explore how ML can benefit stakeholder education (for example, by providing concise, timely materials that people can read comfortably and respond to from their phones). And yet, delivering increasingly complex materials to stakeholders requires designing the message itself. What do leaders tell stakeholders about their role in our decision-making? A few principles can guide us.
First, organizations can provide transparency about stakeholders’ involvement in decisions. It can be easy for stakeholders to overestimate their contribution, especially when digital tools are involved. Responsible leaders can gently highlight where individuals can provide input without overstating the impact the information will have.
Second, leaders are well-advised to reflect on, and often refer to, their organization’s purpose, mission and values. This practice can prevent value drift and inhibit short-sighted stakeholder engagement tactics that might dilute the relationship over time.
Finally, carefully considering timing and approach prepares stakeholders for unexpected advances in AI capabilities. The roles that AI systems play in organizations have shifted radically over the past two years, and the public imagination struggles to keep up. Leading forecasters expect future capabilities to be unrecognizable three years from now. Leaders can invite stakeholders into the discovery process in such a rapid-fire environment and avoid setting expectations that “humans do X, AI systems only do Y.”
Instead of considering stakeholder groups as passive participants, it’s worth considering that their interests and AI system abilities are not always complementary. In the past, stakeholders have held power by providing decision-relevant information. With increasing system capabilities, they are no longer needed for the same functions and will have strong incentives to find ways to increase their relevance.
This tension makes the “selection” step of stakeholder engagement strategy delicate, as stakeholders may feel disenfranchised even as they remain eager to participate in decision-making.
Once leaders have decided which stakeholders to engage at higher rungs of the ladder, they can use a “red-teaming” approach and attempt to poke as many holes in the strategy as possible. The following questions can help.
Finding hidden risks in a stakeholder management plan:
- Who is represented and how?
- How is their feedback integrated?
- Who is missing?
- Of those missing, what are their chief concerns?
- How might this system be manipulated by those seeking power?
- How will we know if this system is being manipulated?
ChatGPT solicits user feedback via upvote/downvote, problem categorization, and an optional comment. ChatGPT uses Reinforcement Learning from Human Feedback (RLHF) to improve its performance. RLHF uses human feedback on model outputs — such as ratings and preference comparisons — to train a reward signal that guides further fine-tuning, improving the system’s ability to make decisions and generate outputs.
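The feedback-collection side of such a loop can be sketched in a few lines. The toy below is purely illustrative and hypothetical — all class and function names are ours, and this is not ChatGPT’s actual pipeline, which trains a separate reward model on preference data rather than averaging votes. It simply mirrors the three signals named above: a vote, a problem category, and an optional comment.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    """One piece of user feedback on a model response (hypothetical schema)."""
    response_id: str
    vote: int           # +1 for upvote, -1 for downvote
    category: str = ""  # optional problem categorization
    comment: str = ""   # optional free-text comment

class FeedbackStore:
    """Toy aggregator: a crude stand-in for a learned reward signal."""

    def __init__(self) -> None:
        self.records: list[FeedbackRecord] = []

    def submit(self, record: FeedbackRecord) -> None:
        self.records.append(record)

    def preference_score(self, response_id: str) -> float:
        """Net vote share in [-1, 1] for one response; 0.0 if no feedback."""
        votes = [r.vote for r in self.records if r.response_id == response_id]
        return sum(votes) / len(votes) if votes else 0.0

store = FeedbackStore()
store.submit(FeedbackRecord("resp-1", +1, comment="helpful answer"))
store.submit(FeedbackRecord("resp-1", -1, category="inaccurate"))
store.submit(FeedbackRecord("resp-1", +1))
print(store.preference_score("resp-1"))  # prints approximately 0.33
```

Even this simplified view surfaces a stakeholder-management question from the red-teaming list above: because scores are driven entirely by who submits votes, an unrepresentative or coordinated group of raters can skew the signal.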
After designing a plan and putting it into action, leaders will encounter a familiar next step: Wading through the reams of input to determine which is actionable and contextual, and which may be well-meaning but out of place. As they consider the channels that stakeholders use to communicate their perspectives and how their contributions will be synthesized, organizations can balance scalable inputs (like surveys) with direct communications where individuals are encouraged to engage more freely (like conversations).
Policymakers experience the challenges of integrating large volumes of feedback daily. In 2016, the Irish government sought to improve stakeholder engagement on climate change and other issues. These were problems with profound effects on everyday people who were typically relegated to the bottom of the stakeholder engagement ladder and rarely given a voice. To address this gap, the Irish government established the Citizens’ Assembly to solicit policy input from a small group of randomly selected citizens.
From selection to education, ensuring that the participants were demographically representative was a stakeholder management feat. But it was the integration piece that proved most challenging. Organizers struggled to cover complex topics in short amounts of time and became overwhelmed by the volume of public submissions. Then, they had to determine how much weight to give the policy reports, since the ordinary citizens hadn’t been elected.
The result, though, was worth it. The assembly strengthened public faith in democracy during a time of intense polarization and fear, and the committee’s report “shaped to a significant degree” Ireland’s groundbreaking climate action plan published shortly afterwards. It’s an encouraging story for leaders looking to improve stakeholder representation related to complex, evolving problems associated with AI.
Stepping back: Considering individual and group experience
So far, we have discussed stakeholder experience from a single perspective: that of a leader deciding how to engage them. But leaders aren’t limited to a single view, nor should they satisfy themselves with only one.
Experimenting with different lenses can offer clarity when considering how to design stakeholders’ experiences. We recommend these three.
First-order thinking: Think like a UX designer
UX designers consider the individual stakeholder in a given audience. What tools are they interacting with to share their input? What is their experience navigating these tools, and how is their attention being directed?
Second-order thinking: Think like a board game designer
Board game designers know that influencing individual behavior occurs in the context of group dynamics. How will someone’s choices affect others in their cohort? How can tools (in this case, communications strategy or tactics) be designed to facilitate cooperation toward shared goals?
Third-order thinking: Think like a macroeconomist
Macroeconomists consider the external environment. What broader political, social and demographic dynamics affect stakeholder management? This “big picture” thinking isn’t a replacement for considering individual and group experiences, but it is essential to identify factors that may have broad and lasting effects on leadership strategies.
Bringing it all together
Managing stakeholder engagement alongside increasingly powerful AI systems is like conducting an orchestra in a hurricane. As capabilities scale, more and more people will find themselves in the crosshairs of leaders’ decisions and look to be heard. At the same time, a rapidly changing environment will push leaders to “decide first, explain later.” Organizations must take action to combat these forces if they want to ensure that their stakeholder engagement remains meaningful and effective.
Before every system launch or step-change in capabilities, return to the ladder model with your goals in mind. Who will be a part of this decision, and to what extent? Who will be left behind? How can your organization’s core purpose — its song, so to speak — come through loud and clear?
Acclaimed composer Darko Butorac says it best. “When conducting, your job is to create the illusion that your choices are true — to bring freshness to works that have been played thousands or millions of times and make it sound like an entirely new experience…You’re working with human beings, 80 to 100 musicians in an orchestra. You have to acknowledge their expertise, their passion, and their desire, and the audience is incredibly perceptive if something is clicking or not. Not just playing together but breathing music together.”
By Abhishek Gupta, Steven Mills, Kes Sampanthar and Emily Dardaman.
The post Getting stakeholder engagement right in responsible AI appeared first on Venture Beat.