General Catalyst CEO Hemant Taneja on Aligning Profit With Purpose

November 23, 2025

Hemant Taneja, CEO, General Catalyst

Hemant Taneja, who leads one of the world’s largest venture firms, believes doing good isn’t just the right thing to do. It’s good business.

At a moment of technologically driven upheaval, the General Catalyst CEO says leaders must bake positive social impact into a business’s soul from the start. Though Taneja and his wife are signatories of the Giving Pledge, he argues philanthropy is no longer enough. “In times like this, think deeply about your values because that’s going to be your guiding light in how you make difficult decisions,” he tells TIME.

For Taneja, that has meant orienting General Catalyst, which now oversees more than $40 billion in assets, toward the technology industry, opening a San Francisco office, and focusing heavily on applied artificial intelligence in healthcare. Yet it has also meant a major investment in the autonomous weapons company, Anduril, which he argues could promote global stability while lowering U.S. defense spending.

Taneja has a track record of crystal-ball gazing: he laid out his vision for the AI revolution in 2018, years before generative AI captured the global imagination. Now, in his new book, Transformative Principles, published in September, he makes the case for why investors seeking long-term returns must think beyond profit to positive impact.

He spoke with TIME about the book’s premise, his concept of “inclusive capitalism,” and why AI’s rapid development means we’re now at a crossroads.

This interview has been condensed and edited for clarity.

In Transformative Principles, you trace your mindset back to your childhood in India, learning about Hinduism and Vedic philosophy. How does this long-term, soul-centric worldview practically influence your day-to-day decisions as a CEO and investor, especially when faced with the short-term pressures of the market?

By human nature and as a society, we are designed to think short-term. I think the best decisions are made in the constructive confrontation between the challenges of the short term and the desires of the long term. Creating the patience to play the long game, as the other side of the coin to being really intense and urgent in the short term, creates a nice balance, I think. It allows you to move fast. It allows you to move with intentionality, and that’s my core philosophy.

A lot of people often ask me what scares me, and I don’t operate with fear. That’s just not the way I like to make decisions. If you have confidence in where you’re headed in the long-term, and you’re trying to do the best in the short-term, it sort of calms you down, makes you even-keeled, which is what organizations need to be to take on long-term complex problems.

You advocate for “inclusive capitalism,” a model focused on “returns plus impact.” Could you explain how your vision of inclusive capitalism is distinct, and why you believe the traditional “profit-only capitalism has run its course and will now do more damage than good”?

If you think about the technological shifts and a lot of the anxiety in society today, it’s because while we created a lot of productivity with technology and globalization, we didn’t pass it on to all of society. It got captured by the few “haves,” and there are a lot of “have-nots.”

If you look at what’s about to happen with AI, the chances are it’s going to get magnified even more. If you look at the pace of progress, it could only get worse. Meanwhile, if you really try to imagine what you could do with AI, you can actually drive abundance for everybody. So, how I think about inclusive capitalism is: how do we diffuse this technology in society so it creates prosperity for everybody around the world, as opposed to a handful of AI companies that capture all the value?

It’s different from just for-profit capitalism because you try to do what’s best for your business and align it with the interests of society. For-profit capitalism is very much “what do I need to do to maximize shareholder value?” and only that. Our belief is that the best companies end up being the ones that are most aligned with society, and that’s what gives you the right to grow for a long time and therefore be good investments.

A recent MIT report found a high failure rate for enterprise AI pilots—they often don’t deliver ROI or get adopted. Your strategy at General Catalyst, however, has been to focus specifically on injecting AI into some of the most complex, human-centric industries, like healthcare. What are the lessons from some of the projects you’ve been involved in?

First of all, diffusion of AI really requires four dimensions, from our perspective:

1) Making sure the enterprise that’s trying to leverage AI has data infrastructure readiness, so you can use it.
2) Adapting these models to your secret sauce, your data.
3) Thinking about how work is going to get done. What are agents going to do? And what is the change management for the organization?
4) This kind of diffusion can’t really happen across the first three dimensions unless you have leadership at the top that has the courage to drive it.

There’s a lot of talk right now about whether AI can ever make returns on the massive investments flowing in. Do you think we’re in an AI bubble?

First of all, bubbles are good. They organize capital and talent into a space that’s interesting. Good outcomes do come out of that, even though you also see carnage. I would say the amount of investment going into it will not make any sense if you try to recoup that investment in the context of cloud and software infrastructure. But if you think about the fact that this is really going after labor, the workforce spend, then you start to see how large that opportunity is. What these AI foundation model companies are trying to do is create capabilities that would extract economic rent from budgets that were otherwise going towards traditional labor.

A lot of people hearing that might think that means more money going to the AI companies and less going to workers, which speaks to some of the concentration of power dynamic you mentioned. But in your book, you write about the opportunity for AI being used to upskill workers who are being displaced in this transformation. Can you speak to that?

This is going to happen because the economics are just going to be too good, and the performance of the AI agents is going to be too good for many of the types of work that get done in enterprises. So the question is, how do you leverage that to make the workforce that much more effective and strategic so your business transforms?

The way we see the jobs transition is in the short term, we actually need a lot of jobs to train people on how to effectively become “superhumans,” using AI.

In the medium term, as these businesses become more and more AI-native, you are able to drive abundance in core industries. In the example of education, if you can imagine we give a tutor to everybody on the planet and they reskill through life with a relationship with that tutor, there’s a whole set of jobs that’s going to get created around an economy where the education system is transforming that way. Same for healthcare. If you had an agentic workforce taking care of people on a phone remotely, which is highly effective and affordable, I think that drives abundance in healthcare. That’s going to create a whole new set of jobs and a new care model.

In the long term, I think we have a real question, which is: as these technologies, both software and robotics, become really, really strong, how will work have to change and how will our priorities as a society have to change? Where AI is a deflationary technology, it does provide us with a great life. I think those are the long-term philosophical questions we need time to really think through and embrace and evolve around as a society.

You write that companies must have a “soul.” OpenAI was founded as a nonprofit organization with a mission to benefit all of humanity, but later created a for-profit arm and has faced criticism for that. From your perspective, what lessons can be learned from OpenAI’s journey? How can you make sure that a company with a strong initial commitment to values can withstand strong commercial pressures?

I think this is the constructive confrontation between the short-term opportunity and long-term goals. Getting those decisions right is ultimately what creates an enduring company that’s in the interest of society. I think OpenAI has done a lot of great things. AI hit the zeitgeist because of ChatGPT. I think they often talk about doing the right things, but the commercial pressures are great. AI is also a very geopolitically relevant topic because every region is trying to create leadership, and America correctly wants to be a leader in AI.

My hope is that the leadership there does think about these things, and is making the right trade-offs. Also learning, as products get brought to market, where they might have unintended consequences. For example, think about the relationship that ChatGPT can have with individuals. Making sure it’s safe and is guiding people the right way. I’m certainly hoping that the team there does think about these issues and is sticking to those values that they had in the beginning.

Do you think responsibility should fall to AI industry leadership alone, or is there a role for outside accountability from policymakers and regulators?

I think these have to be public-private partnerships. So there is a role for that. When you think about these technologies, they have to uphold the spirit of regulations. Wrong choices of business models in healthcare or advertising, while they created big companies in the short term, caused a lot of issues as well. Healthcare became unaffordable; advertising led to polarization.

So I think getting those choices right as businesses, that’s the mindset and mechanisms I talk about for companies. Then making sure the government is providing the right oversight that doesn’t slow down innovation but brings responsibility to the application of AI. Regulations on the application side in some ways already exist.

By that, do you mean you want regulation on application rather than on development?

Honestly, the biggest lever to make a difference there is to just have our founders think about those issues in the way they’re building those companies. The more we do our part in thinking through those responsibilities around the application of technology, the less regulation we’ll need. Self-regulation is better than regulation—I genuinely believe that—if we can, as an entrepreneurial ecosystem, just embrace that mindset.

Healthcare is a major focus for General Catalyst. What is it about that industry that you think makes it an ideal test case for capitalism reinventing itself?

Healthcare is a defining issue. Certainly in America and in most parts of the world, it is becoming unaffordable. We just went through a pandemic. So we all realize how existential it could be if there were another one. Are we really ready to have that resiliency? If you think about what happened in the last few years, healthcare also has the biggest propensity to adopt technology, for two reasons: first, AI hit the zeitgeist and became viable; and second, the pandemic eviscerated the systems and the workforce is burnt out. The workforce isn’t fighting it, saying “we don’t want to use this technology” because they don’t know what it means for them. The workforce in healthcare is embracing it because they need the leverage, they need the help.

So if we intentionally and carefully bring technology in to drive abundance and empower the workforce in healthcare, and then also reskill people in health systems, I think we’ll be able to demonstrate what the adoption of AI and abundance can really mean, and how it can benefit society. We just closed on the acquisition of a health system in Akron, Ohio. And we’re going to try to create that blueprint ourselves with our founders.

Looking ahead, is there one piece of advice that you’d have for founders, for leaders, and for investors reading your book that they can take away?

My most important advice is this is peak ambiguity. We don’t really know the capabilities of this technology as it develops, and geopolitics is changing business in a material way. In times like this, think deeply about your values because that’s going to be your guiding light in how you make difficult decisions. That missionary approach of building companies is incredibly important in these times.
