Dmitri Sirota, Chief Executive Officer of BigID, spoke with NYSE TV’s “Taking Stock with Kristen Scholer” for a special video interview.
Watch the interview above and check out the transcript below. The transcript of this conversation has been lightly edited for length and clarity.
KRISTEN SCHOLER (KS): Artificial intelligence remains in the spotlight. HumanX is currently underway in Las Vegas talking all things AI and companies continue to raise money in this space. Well, joining me now with more on how his company is operating in this current environment is Dmitri Sirota. He’s the CEO and co-founder of BigID. BigID recently debuted BigID Next. How is this platform helping companies protect their data?
DMITRI SIROTA (DS): BigID started off as a company focused on helping organizations manage, secure, and govern their data. As we’ve evolved and as AI has become prominent over the last two, three years, we’ve extended that to actually managing models, copilots, agents. And so BigID Next provides a vehicle for organizations to manage their data, the data that they use to train these AI models, but also to manage the models themselves, to make sure they’re properly governed, properly controlled, and they don’t abuse or misuse the data that goes into training them.
KS: Safeguarding and protecting AI by protecting the data. How crucial is that?
DS: So we do a little bit of both. We actually protect the models and the interactions with the models, employees, consumers, but also the data. So 99% of companies are not gonna be building frontier or foundation models like OpenAI. What they’re gonna be doing is training existing models, like OpenAI’s or Anthropic’s Claude, on their data. So they’re gonna make the models smart on their data, but they want to be careful about how that data gets used and about how the models basically regurgitate that data back to employees and consumers. So we help on both counts. We help them manage what data they feel comfortable using, make sure it’s properly cleansed, it’s compliant with all the regulations, it doesn’t violate any internal policies. And at the same time, we wanna manage the interactions with the models. We wanna make sure the models don’t go rogue, the models don’t send back information that is private or privileged. So BigID Next helps with both.
KS: How does this help companies stay ahead in an uber-competitive environment?
DS: It facilitates them being able to use AI safely. I think today we’re now kind of three years in since OpenAI debuted ChatGPT. I think every company realizes and every board of directors realizes their future hinges on the successful adoption of AI. There are efficiency benefits, there are scale benefits, and this is before we hit AGI. However, training those models, making use of those models, how they allow those models to be used by employees and by consumers and partners, that is still to be defined. So if we’re able to safeguard that, if we’re able to give them guardrails and parameters, then they could take advantage of AI more safely.
KS: Can you quantify the positive impact this can have?
DS: We can. I think we allow them to train and develop more models. We actually do give them an ability to assess the risk of an AI program that combines a model and the data that goes into it. We let them prioritize so they could focus on the ones that fall within a certain parameter of risk that they are willing to take. And ultimately, if we’re able to help them deliver more: more models, more agents, more copilots, and drive more value to their employees and to their customers, then we see benefit.
KS: What more can you tell me about the key differentiators when it comes to BigID Next?
DS: Differentiators are a few things. So first of all, we have visibility into data. We could find it, we can identify it, we could contextualize it, meaning we could tell you if there are other versions of that data and where it exists. And BigID Next has the broadest coverage in terms of where companies keep data, whether it’s in SAP (SAP), whether it’s in Salesforce (CRM), whether it’s in Azure (MSFT), GCP (GOOGL) or AWS (AMZN), or even in a legacy data center. On top of that, it has the ability to control. It doesn’t just tell you about the data. It gives you an ability to control the data. And with BigID Next, we don’t just limit you to the data, we now give you an ability to control the models and how the models interact, both with the data and the consumers that are using the models.
KS: What more can you share in terms of why companies need to adopt this now?
DS: So look, AI is obviously gonna be the most salient kind of change for decades, just like mobile was, just like cloud was, just like internet commerce was maybe 15, 16 years ago. AI is gonna be the prevalent arc that we all need to kind of abide by. And I think companies that are sitting on the sidelines and can’t take advantage of it because of concerns around security, compliance, privacy are gonna be at a disadvantage. And so we’re gonna help those organizations that are regulated, or that are subject to some of the privacy regulations, or that just want to have better experiences for their customers or their employees. We’re gonna help them take advantage of the latest technology, but do it in a responsible and safe way.
KS: What are your goals for 2025?
DS: So the goals are obviously to deal with any negative repercussions in the economy. But, more importantly, I think we’ve seen robust growth over the last three, four years despite softness in the economy. And now with AI becoming as prevalent as it is, becoming kind of the topic that every board and every CEO is conversing about, we want to really drive that value. Some of the companies listed on the exchange are our customers, and I think the more we could showcase around how AI could be used safely, responsibly, how it could be opened up to their employees and to their customers, I think it’s a win-win. It’s a win for us. It’s a win for our customers.