How AI ‘digital minds’ startup Delphi stopped drowning in user data and scaled up with Pinecone

August 21, 2025

Delphi, a two-year-old San Francisco AI startup named after the Ancient Greek oracle, was facing a thoroughly 21st-century problem: its “Digital Minds” — interactive, personalized chatbots modeled after an end user and meant to channel their voice based on their writings, recordings, and other media — were drowning in data.

Each Delphi can draw from any number of books, social feeds, or course materials to respond in context, making each interaction feel like a direct conversation. Creators, coaches, artists and experts were already using them to share insights and engage audiences.

But each new upload of podcasts, PDFs or social posts to a Delphi added complexity to the company’s underlying systems. Keeping these AI alter egos responsive in real time without breaking the system was becoming harder by the week.

Thankfully, Delphi found a solution to its scaling woes using managed vector database darling Pinecone.

Open source only goes so far

Delphi’s early experiments relied on open-source vector stores. Those systems quickly buckled under the company’s needs. Indexes ballooned in size, slowing searches and complicating scale.

Latency spikes during live events or sudden content uploads risked degrading the conversational flow.

Worse, Delphi’s small but growing engineering team found itself spending weeks tuning indexes and managing sharding logic instead of building product features.

Pinecone’s fully managed vector database, with SOC 2 compliance, encryption, and built-in namespace isolation, turned out to be a better path.

Each Digital Mind now has its own namespace within Pinecone. This ensures privacy and compliance, and narrows the search surface area when retrieving knowledge from its repository of user-uploaded data, improving performance.

A creator’s data can be deleted with a single API call. Retrievals consistently come back in under 100 milliseconds at the 95th percentile, accounting for less than 30 percent of Delphi’s strict one-second end-to-end latency target.
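To make the namespace model concrete, here is a minimal sketch using Pinecone's Python client, assuming one namespace per creator. The index name, embedding dimension, IDs, and metadata fields are hypothetical placeholders, not Delphi's actual schema:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("digital-minds")          # hypothetical index name

creator_ns = "creator-1234"                # one namespace per Digital Mind
dim = 1536                                 # assumed embedding dimension

# Store a creator's embedded content chunks in their own namespace.
index.upsert(
    vectors=[{
        "id": "podcast-ep12-chunk-0",
        "values": [0.0] * dim,             # placeholder embedding
        "metadata": {"source": "podcast-ep-12"},
    }],
    namespace=creator_ns,
)

# Retrieval is scoped to that namespace, so searches never cross creators.
results = index.query(
    vector=[0.0] * dim,
    top_k=5,
    namespace=creator_ns,
    include_metadata=True,
)

# Deleting a creator's data is a single call against their namespace.
index.delete(delete_all=True, namespace=creator_ns)
```

Scoping every read and write to a namespace is what keeps one creator's archive invisible to another's Digital Mind while shrinking the search space for each query.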

“With Pinecone, we don’t have to think about whether it will work,” said Samuel Spelsberg, co-founder and CTO of Delphi, in a recent interview. “That frees our engineering team to focus on application performance and product features rather than semantic similarity infrastructure.”

The architecture behind the scale

At the heart of Delphi’s system is a retrieval-augmented generation (RAG) pipeline. Content is ingested, cleaned, and chunked; then embedded using models from OpenAI, Anthropic, or Delphi’s own stack.

Those embeddings are stored in Pinecone under the correct namespace. At query time, Pinecone retrieves the most relevant vectors in milliseconds, and those results are fed to a large language model to produce the response.

This design allows Delphi to maintain real-time conversations without overwhelming system budgets.
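A hedged sketch of what the query-time half of such a RAG loop can look like, assuming the chunk text is stored in the vector metadata under a "text" key and using illustrative OpenAI model names (Delphi's actual models, prompts, and schema are not public):

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_API_KEY").Index("digital-minds")  # hypothetical

def answer(question: str, creator_ns: str) -> str:
    # 1. Embed the incoming question (model choice is illustrative).
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Retrieve the most relevant chunks from this creator's namespace.
    matches = index.query(
        vector=emb, top_k=5, namespace=creator_ns, include_metadata=True
    ).matches
    context = "\n\n".join((m.metadata or {}).get("text", "") for m in matches)

    # 3. Generate a grounded response from the retrieved context.
    reply = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer in the creator's voice using only the context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```

Keeping the retrieval step to a handful of milliseconds is what leaves most of the one-second end-to-end budget for the language model itself.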

As Jeffrey Zhu, VP of Product at Pinecone, explained, a key innovation was moving away from traditional node-based vector databases to an object-storage-first approach.

Instead of keeping all data in memory, Pinecone dynamically loads vectors when needed and offloads idle ones.

“That really aligns with Delphi’s usage patterns,” Zhu said. “Digital Minds are invoked in bursts, not constantly. By decoupling storage and compute, we reduce costs while enabling horizontal scalability.”

Pinecone also automatically tunes algorithms depending on namespace size. Smaller Delphis may only store a few thousand vectors; others contain millions, derived from creators with decades of archives.

Pinecone adaptively applies the best indexing approach in each case. As Zhu put it, “We don’t want our customers to have to choose between algorithms or wonder about recall. We handle that under the hood.”

Variance among creators

Not every Digital Mind looks the same. Some creators upload relatively small datasets — social media feeds, essays, or course materials — amounting to tens of thousands of words.

Others go far deeper. Spelsberg described one expert who contributed hundreds of gigabytes of scanned PDFs, spanning decades of marketing knowledge.

Despite this variance, Pinecone’s serverless architecture has allowed Delphi to scale beyond 100 million stored vectors across 12,000+ namespaces without hitting scaling cliffs.

Retrieval remains consistent, even during spikes triggered by live events or content drops. Delphi now sustains about 20 queries per second globally, supporting concurrent conversations across time zones with zero scaling incidents.

Toward a million digital minds

Delphi’s ambition is to host millions of Digital Minds, a goal that would require supporting at least five million namespaces in a single index.

For Spelsberg, that scale is not hypothetical but part of the product roadmap. “We’ve already moved from a seed-stage idea to a system managing 100 million vectors,” he said. “The reliability and performance we’ve seen gives us confidence to scale aggressively.”

Zhu agreed, noting that Pinecone’s architecture was specifically designed to handle bursty, multi-tenant workloads like Delphi’s. “Agentic applications like these can’t be built on infrastructure that cracks under scale,” he said.

Why RAG still matters and will for the foreseeable future

As context windows in large language models expand, some in the AI industry have suggested RAG may become obsolete.

Both Spelsberg and Zhu push back on that idea. “Even if we have billion-token context windows, RAG will still be important,” Spelsberg said. “You always want to surface the most relevant information. Otherwise you’re wasting money, increasing latency, and distracting the model.”

Zhu framed it in terms of context engineering — a term Pinecone has recently used in its own technical blog posts.

“LLMs are powerful reasoning tools, but they need constraints,” he explained. “Dumping in everything you have is inefficient and can lead to worse outcomes. Organizing and narrowing context isn’t just cheaper—it improves accuracy.”

As covered in Pinecone’s own writings on context engineering, retrieval helps manage the finite attention span of language models by curating the right mix of user queries, prior messages, documents, and memories to keep interactions coherent over time.

Without this, windows fill up, and models lose track of critical information. With it, applications can maintain relevance and reliability across long-running conversations.
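As a toy illustration of that curation step, the sketch below assembles a bounded context from recent conversation turns plus relevance-ranked retrieved chunks. Character counts stand in for a real tokenizer, and the budget and turn limits are arbitrary assumptions, not anything Delphi or Pinecone has published:

```python
def build_context(history: list[str], retrieved: list[str],
                  budget_chars: int = 8000) -> str:
    """Curate a bounded context: keep the last few conversation turns,
    then add relevance-ranked retrieved chunks until the budget is spent."""
    parts: list[str] = []
    used = 0

    # Recent turns keep the dialogue coherent across a long conversation.
    for turn in history[-6:]:
        if used + len(turn) > budget_chars:
            break
        parts.append(turn)
        used += len(turn)

    # Retrieved chunks arrive already ranked by relevance; add until full.
    for chunk in retrieved:
        if used + len(chunk) > budget_chars:
            break
        parts.append(chunk)
        used += len(chunk)

    return "\n\n".join(parts)
```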

From Black Mirror to enterprise-grade

When VentureBeat first profiled Delphi in 2023, the company was fresh off raising $2.7 million in seed funding and drawing attention for its ability to create convincing “clones” of historical figures and celebrities.

CEO Dara Ladjevardian traced the idea back to a personal attempt to reconnect with his late grandfather through AI.

Today, the framing has matured. Delphi emphasizes Digital Minds not as gimmicky clones or chatbots, but as tools for scaling knowledge, teaching, and expertise.

The company sees applications in professional development, coaching, and enterprise training — domains where accuracy, privacy, and responsiveness are paramount.

In that sense, the collaboration with Pinecone represents more than just a technical fit. It is part of Delphi’s effort to shift the narrative from novelty to infrastructure.

Digital Minds are now positioned as reliable, secure, and enterprise-ready — because they sit atop a retrieval system engineered for both speed and trust.

What’s next for Delphi and Pinecone?

Looking forward, Delphi plans to expand its feature set. One upcoming addition is “interview mode,” in which a Digital Mind can ask questions of its own creator to fill knowledge gaps.

That lowers the barrier to entry for people without extensive archives of content. Meanwhile, Pinecone continues to refine its platform, adding capabilities like adaptive indexing and memory-efficient filtering to support more sophisticated retrieval workflows.

For both companies, the trajectory points toward scale. Delphi envisions millions of Digital Minds active across domains and audiences. Pinecone sees its database as the retrieval layer for the next wave of agentic applications, where context engineering and retrieval remain essential.

“Reliability has given us the confidence to scale,” Spelsberg said. Zhu echoed the sentiment: “It’s not just about managing vectors. It’s about enabling entirely new classes of applications that need both speed and trust at scale.”

If Delphi continues to grow, millions of people will be interacting day in and day out with Digital Minds — living repositories of knowledge and personality, powered quietly under the hood by Pinecone.

The post How AI ‘digital minds’ startup Delphi stopped drowning in user data and scaled up with Pinecone appeared first on VentureBeat.
