DNYUZ
10 New AI Challenges—and How to Meet Them

May 22, 2025

With a new record set for the number of executive orders (EOs) by any U.S. president in their first 100 days, you could be forgiven for missing EO 14277, titled “Advancing Artificial Intelligence Education for American Youth,” which came right after the EO that restores U.S. seafood competitiveness. In the deluge of President Trump’s EOs, 14277 is 34 orders away from the one aimed at closing the Department of Education—the very department assigned to do the heavy lifting of advancing AI education at the K-12 level. Fortunately, students might already be adept at using AI tools. Where deeper education may be needed is in building awareness of how AI is posing “new challenges for the defense of human dignity, justice, and labor,” as Pope Leo XIV has noted.

To relieve the soon-to-be-shuttered Department of Education of the immense burden of teaching about AI’s “new challenges,” I have assembled a 10-point lesson plan and, for each point, one potential remedy.

The end of the world is closer than you think: As recently as March 2023, many of the world’s top AI experts were calling for a pause in AI development. Their worries had to do with “profound risks to society and humanity.” In the largest survey of AI and machine-learning experts, 48 percent of those who considered themselves “AI net optimists” put a 5 percent probability on human extinction due to AI—an unacceptably high risk by any measure. While one could dismiss such worries as overblown, the odds of a bad outcome have now become worse: The “AI doomers” from March 2023 have mostly self-silenced despite having no objective proof that AI has become any safer. In fact, it might have become more dangerous, with emergent innovations, such as AI agents that can make independent decisions, perhaps autonomously triggering a confrontation; hopes of AI safety regulations diminishing under the Trump administration as it prioritizes accelerating AI with minimal friction; and U.S. rhetoric against its prime AI-related antagonist, China, becoming even more strident.

Therefore, it’s time to institute a “kill switch” as an industry-wide standard, along the lines of the May 2024 commitment in Seoul, where major companies developing AI volunteered to steer AI away from automated attacks and to allow systems to shut down automatically in the event of a catastrophe.

Prepare for a persistent AI trust gap: While AI models and tools get better, users’ trust in AI companies has fallen, creating an optimism gap between developers and regular users. The distrust has many sources.

First, large language models often “hallucinate,” returning false, sometimes bizarre, information. The hallucination rate of major AI models ranged from 0.7 to 29.9 percent as of April 29, according to a “hallucination leaderboard.” The problems get worse when AI uses “reasoning systems.”

Second, AI is prone to reinforcing preexisting biases: Image generators play back racial and gender stereotypes, and predictive algorithms may raise barriers to those already disadvantaged.

Third, AI models aren’t transparent about how algorithms are developed, the source and quality of training data, or the makeup of the development teams. Medical imaging presents a compelling illustration of the dilemma posed by this “black box” feature: While neural networks are good at detecting disease markers in patient scans, they may fail to offer clear reasoning for their conclusions.

Fourth, AI’s black box nature also heightens worries about personal data protection, privacy, and whether there are ethical frameworks for safeguarding individuals. AI-aided facial recognition technology and electronic surveillance, or “digital prisons,” have undermined human rights.

One remedy is for AI developers to invest in technologies that improve the trustworthiness of their products, such as data augmentation techniques, feedback loops, and bias metrics for training datasets: disparate impact measures the disparity in outcomes between groups, while equalized odds checks that a model’s predictions are equally accurate across different protected groups.
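The two metrics just named can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; all group labels, predictions, and variable names here are hypothetical:

```python
def group_rates(y_true, y_pred):
    """True-positive and false-positive rates for one group's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    positives = sum(y_true)
    negatives = len(y_true) - positives
    return tp / positives, fp / negatives

# Hypothetical model outcomes for two protected groups
group_a_true = [1, 1, 0, 0, 1, 0]
group_a_pred = [1, 1, 0, 1, 1, 0]
group_b_true = [1, 1, 0, 0, 1, 0]
group_b_pred = [1, 0, 0, 0, 0, 0]

# Disparate impact: ratio of selection (positive-prediction) rates.
# A value far below 1.0 signals that group B is selected far less often.
rate_a = sum(group_a_pred) / len(group_a_pred)
rate_b = sum(group_b_pred) / len(group_b_pred)
disparate_impact = rate_b / rate_a

# Equalized odds: TPR and FPR should match across groups;
# the gap is the largest mismatch between the two error rates.
tpr_a, fpr_a = group_rates(group_a_true, group_a_pred)
tpr_b, fpr_b = group_rates(group_b_true, group_b_pred)
eo_gap = max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```

With these toy numbers the disparate impact ratio is 0.25 and the equalized-odds gap is large, which is the kind of signal such metrics are designed to surface before a model ships.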

Anticipate uneven adoption: While ChatGPT famously broke historical records for technology adoption, adoption of paid AI tools has been uneven despite the buzz and apparent benefits. Ramp, a corporate expense management company, found that while AI spending has grown, only about a third of companies have paid for an AI tool. Unlike the internet or social media, which are freely available in exchange for data and advertising exposure, the pricing of AI tools is likely to be different and will lead to different levels of access. The persistence of an AI trust gap will reinforce uneven usage and competence, possibly leading to AI haves and have-nots.

To get around this, employers can train workers to detect unreliable AI-generated information and become better users of the technology, thereby increasing their comfort level with using it.

Don’t assume a productivity revolution: According to McKinsey, generative AI could boost labor productivity growth by 0.1 to 0.6 percentage points annually through 2040; combining it with all other technologies could add 0.5 to 3.4 percentage points annually. To consider how these forecasts might play out, it is natural to look at AI’s predecessors—the internet and related digital technologies—for comparison. Unfortunately, the comparison does not suggest a productivity revolution is forthcoming; U.S. worker productivity growth fell when early digital technologies were introduced.

Even worse, Oxford scholar Jean-Paul Carvalho has argued that outsourcing various cognitive tasks to AI could destroy human productivity. Seven out of 10 American teens surveyed have used a generative AI tool, and 53 percent have used one for homework assistance, leaving teachers worried about a decline in critical thinking and a deterioration of cognitive skills.

Teachers can integrate the technology into their pedagogy by actively using AI to build curiosity and critical thinking. A World Bank study of a teacher-guided GPT-4 tutor in an afterschool program in Nigeria found that using an AI tutor equated to “1.5 to 2 years of ‘business-as-usual’ schooling.”

The AI engine is running low on gas: The internet itself could be too small to sustain the data needs of large language models. Even after using all the high-quality language and image data available, there could be a shortfall of at least 10 trillion to 20 trillion tokens to train the next generation of GPT, as training data supplies run out. Even for the data currently in use, there are pending questions and lawsuits about intellectual property rights.

There are, however, new sources and training processes that models can turn to: synthetic data generated by AI, datasets that dive into esoteric subjects and explore the specialist literature in more depth, and rereading and self-reflection on existing datasets, for example. Such techniques must be applied with care: When models are retrained on data generated by AI itself, developers must guard against “model collapse,” where the quality of the information degrades as each iteration of training relies on earlier versions of itself.
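The retraining loop behind model collapse can be sketched as a toy simulation. The Gaussian “model” below stands in for a real generative model purely for illustration; nothing here comes from the article, and all parameters are assumptions:

```python
import random
import statistics

# Toy analogue of model collapse: fit a Gaussian "model" to real data,
# then repeatedly retrain it only on samples drawn from the previous
# generation's own output, so estimation errors compound over generations.
random.seed(0)

def fit(samples):
    """Fit a Gaussian by estimating its mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

real_data = [random.gauss(0.0, 1.0) for _ in range(500)]
mu, sigma = fit(real_data)

history = [sigma]
for generation in range(20):
    synthetic = [random.gauss(mu, sigma) for _ in range(500)]  # AI-generated data
    mu, sigma = fit(synthetic)  # retrain on synthetic output only
    history.append(sigma)

# With no fresh real data, the fitted parameters tend to drift away from
# the true distribution; mixing real data back in is the usual mitigation.
```

Each generation inherits the previous one's sampling error, which is why practitioners blend synthetic data with fresh real data rather than training on model output alone.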

The AI industry risks being disrupted: There have been unprecedented levels of investment in the technology’s development, with an estimated $230 billion spent in 2024 and even more this year. At the same time, the proprietary AI models on which much of that money was spent risk becoming undifferentiated. Meanwhile, challengers with far fewer resources, such as DeepSeek, have achieved similar performance, triggering the classic cycle of disruption, in which a lower-cost competitor with good-enough performance steadily takes over the market and leaves incumbents with stranded assets, including over-investment in infrastructure and upended business models.

As I have suggested in an earlier article for Foreign Policy, at least one major player can signal a stop to the escalation and deploy technology that is “good enough.” That is, technology aimed not at the hardest number-crunching problem it can solve but at the breadth of problems it can solve for the largest number of people.

Expect fragmented AI: The geography of AI development is already splintered with the divergence between American AI and Chinese AI. All other countries were separated into three tiers during the Biden administration’s waning days, limiting their access to cutting-edge American AI chips. While the Trump administration plans to scrap the tiered export controls, there’s speculation that the tiers will be replaced by bilateral negotiations with individual countries, which would create greater fragmentation. Meanwhile, countries are pushing forward with sovereign AI initiatives to further their national competitiveness and security objectives.

While multilateralism hasn’t always delivered on wider issues, pursuing it is still vital in the context of an emergent multi-purpose technology, such as AI. The Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, adopted in May 2024, is the first legally binding international treaty on AI. It aims to establish common standards for AI governance and could be a starting point for further global coordination.

Expect more income inequality: AI will, no doubt, affect how work gets done in the future. Approximately 60 percent of jobs in the developed world are exposed to AI, compared with 40 percent in emerging markets and 26 percent in low-income countries. While not all exposed jobs will be displaced, there’s little doubt that many employers view AI as a way to trim their workforces. According to Nobel laureate and professor Daron Acemoglu, even with just 5 percent of all tasks profitably performed by AI in the next decade, the technology will usher in a new wave of income inequality. Analysts expect that higher-income workers will be the greatest beneficiaries of AI and will experience a rise in wages, while those in more routine knowledge roles will experience a loss in income.

Acemoglu’s Massachusetts Institute of Technology colleague, David Autor, suggested that AI can narrow income gaps by giving access to skills and new capabilities to those who are less paid. Organizations such as AI4ALL and the AI Skills Coalition, which offer hands-on experience in AI tools to a wide variety of users, along with Google’s AI Opportunity Fund, Microsoft’s AI skills for nonprofits collection, and Mastercard’s competitions and awards, can help make AI use more inclusive and narrow income gaps in certain areas.

Watch out for the environmental trade-offs: AI acceleration is driving up energy consumption, contributing to global energy poverty, and leading to negative climate impacts. Data center energy consumption is expected to grow by 35 to 128 percent by 2026, and even with the anticipated investments in new energy infrastructure, demand will continue to run ahead of supply. Data centers are also water-intensive, ranking among the top 10 water-consuming commercial industries in the United States. While AI itself can help by informing efficient energy use, these benefits aren’t evenly distributed, since data on energy consumption is scarce among poorer populations.

Still, there’s hope in ongoing innovation. AI development and use are becoming more energy efficient: Nvidia is developing GPUs that deliver 30 times the performance while using 25 times less energy, and Google’s DeepMind AI can cut cooling costs in data centers by up to 40 percent.

While we wait for artificial general intelligence, there are immediate opportunities to deploy artificial specific intelligence: The most advanced R&D in AI is aiming for artificial general intelligence (AGI), where AI achieves and exceeds human-level capabilities across all cognitive domains. AI company leaders expect AGI to arrive within two to three years, or five to 10, while external analysts are more skeptical of the timeline.

In the meantime, there are immediate applications for AI, especially in the developing world and in low-productivity sectors, such as agriculture, health care, education, and finance, which have the largest impact on most people’s lives. Here, even a small injection of readily available AI information can yield large gains in efficiency and added value. Such “small AI” is already here—AI-aided health care for diabetics in Mexico or forest-monitoring systems in Brazil, for example. We don’t need to wait for game-changing discoveries—what’s missing is global attention and resources directed toward the many unmet needs that can be fulfilled with rudimentary AI.

The post 10 New AI Challenges—and How to Meet Them appeared first on Foreign Policy.

Copyright © 2025.
