Salesforce’s new CoAct-1 agents don’t just point and click — they write code to accomplish tasks faster and with greater success rates

August 12, 2025

Researchers at Salesforce and the University of Southern California have developed a technique that gives computer-use agents the ability to execute code while navigating graphical user interfaces (GUIs): the agent can write scripts as well as move a cursor and click buttons in an application, combining the strengths of both approaches to speed up workflows and reduce errors.

This hybrid approach allows an agent to bypass brittle and inefficient mouse clicks for tasks that can be better accomplished through coding.

The system, called CoAct-1, sets a new state-of-the-art on key agent benchmarks, outperforming other methods while requiring significantly fewer steps to accomplish complex tasks on a computer.

This upgrade can pave the way for more robust and scalable agent automation with significant potential for real-world applications.

The fragility of point-and-click AI agents

Computer-use agents typically rely on vision-language or vision-language-action models (VLMs or VLAs) to perceive a screen and take action, mimicking how a person uses a mouse and keyboard.

While these GUI-based agents can perform a variety of tasks, they often falter when faced with long, complex workflows, especially in applications with dense menus and options, like office productivity suites.

For example, a task that involves locating a specific table in a spreadsheet, filtering it, and saving it as a new file can involve a long and precise sequence of GUI manipulations.

This is where brittleness creeps in. “In these scenarios, existing agents frequently struggle with visual grounding ambiguity (e.g., distinguishing between visually similar icons or menu items) and the accumulated probability of making any single error over the long horizon,” the researchers write in their paper. “A single mis-click or misunderstood UI element can derail the entire task.”

To address these challenges, many researchers have focused on augmenting GUI agents with high-level planners.

These systems use powerful reasoning models like OpenAI’s o3 to decompose a user’s high-level goal into a sequence of smaller, more manageable subtasks.

While this structured approach improves performance, it doesn’t solve the problem of navigating menus and clicking buttons, even for operations that could be done more directly and reliably with a few lines of code.

CoAct-1: A multi-agent team for computer tasks

To solve these limitations, the researchers created CoAct-1 (Computer-using Agent with Coding as Actions), a system designed to “combine the intuitive, human-like strengths of GUI manipulation with the precision, reliability, and efficiency of direct system interaction through code.”

The system is structured as a team of three specialized agents that work together: an Orchestrator, a Programmer, and a GUI Operator.

The Orchestrator acts as the central planner or project manager. It analyzes the user’s overall goal, breaks it down into subtasks, and assigns each subtask to the best agent for the job. It can delegate backend operations like file management or data processing to the Programmer, which writes and executes Python or Bash scripts.

For frontend tasks that require clicking buttons or navigating visual interfaces, it turns to the GUI Operator, a VLM-based agent.

“This dynamic delegation allows CoAct-1 to strategically bypass inefficient GUI sequences in favor of robust, single-shot code execution where appropriate, while still leveraging visual interaction for tasks where it is indispensable,” the paper states.

The workflow is iterative. After the Programmer or GUI Operator completes a subtask, it sends a summary and a screenshot of the current system state back to the Orchestrator, which then decides the next step or concludes the task.
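The delegation loop described above can be sketched in a few dozen lines. The class and method names below are illustrative, not from the paper, and the routing rule is a toy keyword check standing in for the reasoning model the real Orchestrator uses:

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    summary: str
    screenshot: bytes = b""  # fed back to the Orchestrator with the summary

class Programmer:
    def run(self, subtask: str) -> StepResult:
        # A real agent would generate and execute a Python/Bash script here.
        return StepResult(summary=f"script finished: {subtask}")

class GUIOperator:
    def run(self, subtask: str) -> StepResult:
        # A real agent would issue clicks and keystrokes via a VLM here.
        return StepResult(summary=f"GUI actions finished: {subtask}")

class Orchestrator:
    def __init__(self):
        self.programmer = Programmer()
        self.gui_operator = GUIOperator()

    def choose_agent(self, subtask: str):
        # Toy routing rule; CoAct-1 uses a reasoning model for this decision.
        backend_hints = ("file", "data", "script", "archive", "convert")
        if any(hint in subtask.lower() for hint in backend_hints):
            return self.programmer
        return self.gui_operator

    def solve(self, subtasks: list[str]) -> list[StepResult]:
        history = []
        for subtask in subtasks:
            result = self.choose_agent(subtask).run(subtask)
            history.append(result)  # summary + screenshot inform the next step
        return history
```

The point of the structure is that backend-flavored subtasks route to single-shot code execution, while anything requiring visual judgment falls through to the GUI agent.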

The Programmer agent uses an LLM to generate its code and sends commands to a code interpreter to test and refine its code over multiple rounds.

Similarly, the GUI Operator uses an action interpreter that executes its commands (e.g., mouse clicks, typing) and returns the resulting screenshot, allowing it to see the outcome of its actions. The Orchestrator makes the final decision on whether the task should continue or stop.
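The Programmer's multi-round cycle amounts to a generate-execute-refine loop. A minimal sketch, where `generate(feedback)` and `execute(code)` are stand-ins for the LLM and the code interpreter (both names are illustrative, not from the paper):

```python
def refine_script(generate, execute, max_rounds: int = 3) -> str:
    """Toy version of the Programmer agent's generate/execute/refine cycle."""
    feedback = ""
    for _ in range(max_rounds):
        code = generate(feedback)   # LLM drafts (or repairs) a script
        ok, output = execute(code)  # interpreter runs it
        if ok:
            return output
        feedback = output           # error text guides the next draft
    raise RuntimeError("no working script within the round budget")
```

Feeding the interpreter's error output back into the next generation round is what lets the agent self-correct without any GUI interaction.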

A more efficient path to automation

The researchers tested CoAct-1 on OSWorld, a comprehensive benchmark that includes 369 real-world tasks across browsers, IDEs, and office applications.

The results show CoAct-1 establishes a new state-of-the-art, achieving a success rate of 60.76%.

The performance gains were most significant in categories where programmatic control offers a clear advantage, such as OS-level tasks and multi-application workflows.

For instance, consider an OS-level task like finding all image files within a complex folder structure, resizing them, and then compressing the entire directory into a single archive.

A purely GUI-based agent would need to perform a long, brittle sequence of clicks and drags (opening folders, selecting files, and navigating menus), with a high chance of error at each step.

CoAct-1, by contrast, can delegate this entire workflow to its Programmer agent, which can accomplish the task with a single, robust script.
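The kind of script a Programmer agent might emit for this task is a few lines of standard-library Python. This sketch only collects and archives the images; a real agent script would also resize each image first (e.g. with an imaging library such as Pillow), a step omitted here to keep the example self-contained:

```python
import zipfile
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".bmp"}

def archive_images(root: str, archive: str) -> int:
    """Collect every image under `root`, however deeply nested, and pack
    the set into a single zip archive. Returns the number of files packed."""
    root_path = Path(root)
    images = sorted(p for p in root_path.rglob("*")
                    if p.suffix.lower() in IMAGE_EXTS)
    with zipfile.ZipFile(archive, "w") as zf:
        for img in images:
            zf.write(img, img.relative_to(root_path))  # keep folder layout
    return len(images)
```

One call replaces the entire click sequence, and either succeeds or fails atomically rather than partway through a drag.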

Beyond just a higher success rate, the system is dramatically more efficient. CoAct-1 solves tasks in an average of just 10.15 steps, a stark contrast to the 15.22 steps required by leading GUI-only agents like GTA-1.

While other agents like OpenAI’s CUA 4o averaged fewer steps, their overall success rate was much lower, indicating CoAct-1’s efficiency is coupled with greater effectiveness.

The researchers found a clear trend: tasks that require more actions are more likely to fail. Reducing the number of steps not only speeds up task completion but, more importantly, minimizes the opportunities for error.

Therefore, finding ways to compress multiple GUI steps into a single programmatic task can make the process both more efficient and less error-prone.
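The arithmetic behind this trend is simple compounding: if each step succeeds independently with probability p, a run of n steps succeeds with probability p**n, so cutting the step count raises the odds of finishing cleanly. The 97% per-step rate below is an assumed figure for illustration, not a number reported in the paper; only the average step counts (15.22 vs. 10.15) come from the article:

```python
def success_probability(per_step: float, steps: float) -> float:
    """Probability that `steps` independent steps all succeed."""
    return per_step ** steps

p = 0.97  # assumed per-step success rate, for illustration only
gui_only = success_probability(p, 15.22)  # avg. steps for GUI-only agents
hybrid = success_probability(p, 10.15)    # avg. steps reported for CoAct-1
```

Under that assumption the shorter hybrid runs finish cleanly noticeably more often, which is the intuition behind compressing GUI sequences into single programmatic actions.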

As the researchers conclude, “This efficiency underscores the potential of our approach to pave a more robust and scalable path toward generalized computer automation.”

From the lab to the enterprise workflow

The potential for this technology goes beyond general productivity. For enterprise leaders, the key lies in automating complex, multi-tool processes where full API access is a luxury, not a guarantee.

Ran Xu, a co-author of the paper and Director of Applied AI Research at Salesforce, points to customer support as a prime example.

“A service support agent uses many different tools — general tools such as Salesforce, industry-specific tools such as EPIC for healthcare, and a lot of customized tools — to investigate a customer request and formulate a response,” Xu told VentureBeat. “Some of the tools have API access while others don’t. It is a perfect use case that could potentially benefit from our technology: a compute-use agent that leverages whatever is available from the computer, whether it’s an API, code, or just the screen.”

Xu also sees high-value applications in sales, such as prospecting at scale and automating bookkeeping, and in marketing for tasks like customer segmentation and campaign asset generation.

Navigating real-world challenges and the need for human oversight

While the results on the OSWorld benchmark are strong, enterprise environments are far messier, filled with legacy software and unpredictable UIs.

This raises critical questions about robustness, security, and the need for human oversight.

A core challenge is ensuring the Orchestrator agent makes the right choice when faced with an unfamiliar application. According to Xu, the path to making agents like CoAct-1 robust for custom enterprise software involves training them with feedback in realistic, simulated environments.

The goal is to create a system where the “agent could observe how human agents work, get trained within a sandbox, and when it goes live, continue to solve tasks under the guidance and guardrail of a human agent.”

The ability for the Programmer agent to execute its own code also introduces obvious security concerns. What stops the agent from executing harmful code based on an ambiguous user request?

Xu confirms that robust containment is essential. “Access control and sandboxing is the key,” he said, emphasizing that a human must “understand the implication and give the AI access for safety.”

Sandboxing and guardrails will be critical to validating agent behavior before deployment on critical systems.
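At its simplest, the containment idea means never running agent-generated code in the host process. A minimal sketch, assuming the agent emits Python: execute it in a separate isolated-mode interpreter with a hard timeout. Production sandboxing would layer OS-level isolation (containers, seccomp, read-only filesystems) on top of this:

```python
import subprocess
import sys
import tempfile

def run_sandboxed(code: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Run untrusted agent-generated Python in a child process and report
    (succeeded, combined output). Kills the child if it exceeds `timeout`."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],  # -I: ignore env vars and user site
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.returncode == 0, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return False, "timed out"
```

The timeout is the guardrail against a runaway script; access control decides what the child process is allowed to see in the first place.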

Ultimately, for the foreseeable future, overcoming ambiguity will likely require a human-in-the-loop. When asked about handling vague user queries, a concern also raised in the paper, Xu suggested a phased approach. “I see human-in-the-loop to start,” he noted.

While some tasks may eventually become fully autonomous, for high-stakes operations, human validation will remain crucial. “Some mission-critical ones may always need human approval.”

The post Salesforce’s new CoAct-1 agents don’t just point and click — they write code to accomplish tasks faster and with greater success rates appeared first on VentureBeat.
