As AI pervades every sphere of modern life, the central challenge facing business leaders, policymakers and innovators is no longer whether to adopt intelligent systems but how. In a world marked by escalating polarization, resource depletion, eroding trust in institutions and volatile information landscapes, the critical imperative is to engineer AI so that it contributes meaningfully and sustainably to human and planetary well-being.
Prosocial AI — a framework of design, deployment and governance principles that ensure AI is thoughtfully tailored, trained, tested and targeted to uplift people and the planet — is more than a moral stance or PR veneer. It is a strategic approach to positioning AI within a broader ecology of intelligence that values collective flourishing over narrow optimization.
The ABCD of AI’s potential: From gloom to glory
The rationale for prosocial AI emerges from four intertwined realms — agency, bonding, climate and division (ABCD). Each domain highlights the dual character of AI: It can either intensify existing dysfunctions or act as a catalyst for regenerative, inclusive solutions.
- Agency: Too often, AI-driven platforms rely on addictive loops and opaque recommender systems that erode user autonomy. Prosocial AI, by contrast, can activate agency by revealing the provenance of its suggestions, offering meaningful user controls and respecting the multifaceted nature of human decision-making. It is not merely about “consent” or “transparency” as abstract buzzwords; it is about designing AI interactions that acknowledge human complexity — the interplay of cognition, emotion, bodily experience and social context — and enabling individuals to navigate their digital environments without succumbing to manipulation or distraction.
- Bonding: Digital technologies can either fracture societies into echo chambers or serve as bridges that connect diverse people and ideas. Prosocial AI applies nuanced linguistic and cultural models to identify shared interests, highlight constructive contributions and foster empathy across boundaries. Instead of fueling outrage for attention, it helps participants discover complementary perspectives, strengthening communal bonds and reinforcing the delicate social fabrics that hold societies together.
- Climate: AI’s relationship with the environment is fraught with tension. AI can optimize supply chains, enhance climate modeling and support environmental stewardship. However, the computational intensity of training large models often entails a considerable carbon footprint. A prosocial lens demands designs that balance these gains against ecological costs — adopting energy-efficient architectures, transparent lifecycle assessments and ecologically sensitive data practices. Rather than treat the planet as an afterthought, prosocial AI anchors climate considerations as a cardinal priority: AI must not only advise on sustainability but must be sustainable.
- Division: The misinformation cascades and ideological rifts that define our era are not an inevitable byproduct of technology, but a result of design choices that privilege virality over veracity. Prosocial AI counters this by embedding cultural and historical literacy into its processes, respecting contextual differences and providing fact-checking mechanisms that enhance trust. Rather than homogenizing knowledge or imposing top-down narratives, it nurtures informed pluralism, making digital spaces more navigable, credible and inclusive.
Double literacy: Integrating AI and NI
Realizing this vision depends on cultivating what we might call “double literacy.” On one side is AI literacy: mastering the technical intricacies of algorithms, understanding how biases emerge from data and establishing rigorous accountability and oversight mechanisms. On the other side is natural intelligence (NI) literacy: a comprehensive, embodied understanding of human cognition and emotion (brain and body), personal identity (self) and cultural embeddedness (society).
This NI literacy is not a soft skill set perched on the margins of innovation; it is fundamental. Human intelligence is shaped by neurobiology, physiology, interoception, cultural narratives and community ethics — an intricate tapestry that transcends reductive notions of “rational actors.” By bringing NI literacy into dialogue with AI literacy, developers, decision-makers and regulators can ensure that digital architectures honor our multidimensional human reality. This holistic approach fosters systems that are ethically sound, context-sensitive and capable of complementing rather than constraining human capacities.
AI and NI in synergy: Prosocial AI goes beyond zero-sum thinking
The popular imagination often pits machines against humans in a zero-sum contest. Prosocial AI challenges this dichotomy. Consider the beauty of complementarity in healthcare: AI excels at pattern recognition, sifting through vast troves of medical images to detect anomalies that might elude human specialists. Physicians, in turn, draw on their embodied cognition and moral instincts to interpret results, communicate complex information and consider each patient’s broader life context. The outcome is not simply more efficient diagnostics; it is more humane, patient-centered care. Similar paradigms can transform decision-making in law, finance, governance and education.
By integrating the precision of AI with the nuanced judgment of human experts, we might transition from hierarchical command-and-control models to collaborative intelligence ecosystems. Here, machines handle complexity at scale and humans provide the moral vision and cultural fluency necessary to ensure that these systems serve authentic public interests.
Building a prosocial infrastructure
To embed prosocial AI at the core of our future, we need a concerted effort across all sectors:
Industry and tech companies: Innovators can prioritize “human-in-the-loop” designs and explicitly reward metrics tied to well-being rather than engagement at any cost. Instead of designing AI to hook users, they can build systems that inform, empower and uplift — measured by improvements in health outcomes, educational attainment, environmental sustainability or social cohesion.
Example: The Partnership on AI provides frameworks for prosocial innovation, helping guide developers toward responsible practices.
Civil society and NGOs: Community groups and advocacy organizations can guide the development and deployment of AI, testing new tools in real-world contexts. They can bring ethnically, linguistically and culturally diverse perspectives to the design table, ensuring that the resulting AI systems serve a broad range of human experiences and needs.
Educational institutions: Schools and universities should integrate double literacy into their curricula while reinforcing critical thinking, ethics and cultural studies. By nurturing AI and NI literacy, educational bodies can help ensure that future generations are both skilled in machine learning (ML) and deeply grounded in human values.
Example: The MIT Schwarzman College of Computing and Stanford’s Institute for Human-Centered AI exemplify transdisciplinary approaches that unite technical rigor with human inquiry.
Governments and policymakers: Legislation and regulatory frameworks can incentivize prosocial innovation, making it economically viable for companies to produce AI systems that are transparent, accountable and aligned with social goals. Citizen assemblies and public consultations can inform these policies, ensuring that the direction of AI reflects society’s diverse voices.
Beyond boxes to a holistic hybrid future
As AI integrates deeply into the global socio-economic fabric, we must resist the impulse to treat technology as a black box optimized for specific metrics. Instead, we can envision a hybrid future where human and machine intelligences co-evolve, guided by shared principles and grounded in a holistic understanding of ourselves and our environments. Prosocial AI moves beyond a simplistic choice between innovation and responsibility. It offers a richer tapestry of possibilities, where AI empowers rather than addicts, connects rather than fragments and regenerates rather than depletes.
The future of AI will not be determined solely by computational prowess or algorithmic cunning. It will be defined by how organically we weave these capabilities into the human sphere, acknowledging the interplay of brain and body, self and society, local nuance and planetary imperatives. In doing so, we create a more expansive standard of success: one measured not only by profit or efficiency but by the flourishing of people and the planet’s resilience.
Prosocial AI can guide us along that path. The future starts now, with a new ABCD: Aspire to an inclusive society; Believe that you are part of making it happen; Choose which side of history you want to be on; and Do what you feel is right.
After two decades with UNICEF and the publication of several books, Dr. Cornelia C. Walther is currently a senior fellow at the University of Pennsylvania working on ProSocial AI.
The post Why ‘prosocial AI’ must be the framework for designing, deploying and governing AI appeared first on Venture Beat.