Commerce Department and VC firms partner to encourage “responsible” AI use for startups

General Catalyst managing director Hemant Taneja.
Noam Galai

As debate over regulating artificial intelligence roils Congress, the U.S. government will enter into a voluntary agreement with leading venture capital firms to encourage early-stage startups to develop AI responsibly.

“We need to have a framework of accountability, explainability, transparency,” Hemant Taneja, managing director of venture capital firm General Catalyst and founder of the think tank leading the initiative, said at Fortune’s CEO Initiative Tuesday.

The plan, known as the Responsible AI Initiative, is a joint effort between the Commerce Department and Responsible Innovation Labs (RIL), a consortium of tech industry players, including investors, that advocates for safe innovation practices. Investors who sign on will commit to funding early-stage AI companies—which have become extraordinarily popular in Silicon Valley of late—under a set of voluntary safeguards meant to ensure responsible uses of the new technology. Signatories would agree to disclose safety evaluations, submit to regular audits of their AI platforms, and commit to implementing any resulting improvements.

There have been several other efforts by the private sector to impose some form of self-regulation on AI; however, those have been limited to established Big Tech firms rather than startups.

“Our coalition has developed a framework that promotes responsible innovation, growth for early-stage companies leveraging AI, and ongoing collaboration between the public and private sectors,” RIL executive director Gaurab Bansal told Fortune before the conference.

 “We’re encouraged to see venture capitalists, startups, and business leaders rallying around this and similar efforts,” a Commerce Department spokesperson told Fortune. The department previously confirmed that it “provided feedback to RIL” about its “priorities.” 

The focus on startups was deliberate, to ensure that AI regulations don’t get written in favor of tech’s biggest players, a person close to the matter told Fortune. At the CEO Initiative, Taneja said AI would be an innovation that created competition between incumbents and startups in a way previous technological advancements hadn’t. Startups often lack the resources to consider the long-term implications of the technology they develop, and so may struggle to implement responsible AI practices without clear guidelines. “These big companies have responsible AI teams, they have trust and safety teams. People have PhDs in this, and early-stage companies don’t have that,” Responsible Innovation Labs senior advisor Lauren Wagner said.

Wagner and Bansal also offered a more pragmatic view of AI regulation: Investors and prospective buyers won’t want to give their money to a startup that can’t guarantee responsible uses of AI. “Startups seek to acquire customers, so they need folks to trust the systems that they’re creating,” Wagner said. 

The RIL was founded by General Catalyst’s Taneja and two former executives from the payments company Stripe, in which General Catalyst has invested. So far, other signatories on the new initiative include Mayfield, Bain Capital Ventures, Institutional Venture Partners, and Lux Capital, along with others that asked not to be named, an RIL spokesperson confirmed.

Taneja has a reputation as an outspoken figure in the Silicon Valley community. When the Bay Area was reeling from the collapse of Silicon Valley Bank, he spearheaded an open letter supporting the bank (so long as it was acquired by another entity), which ultimately gathered signatures from 120 companies, and collaborated with other VC firms on a set of recommendations to help startups guard against future bank runs. And last month, he jumped into the public debate on AI with a Harvard Business Review article written alongside CNN’s Fareed Zakaria, urging the U.S. to outpace China’s technological advancement.

AI is “heading toward two hermetically sealed ecosystems: one that supports open systems but is also associated with democracy, privacy, and individual rights, versus one that supports state control, information-flow restriction, and politically imposed limits on openness,” Taneja and Zakaria wrote.  

Now, Taneja and General Catalyst are turning their attention to AI regulation. The tech industry has already taken some strides toward self-regulation, though those efforts have largely been limited to the biggest companies, like Alphabet and Microsoft. In July, those companies, along with Anthropic and OpenAI, two of the most promising startups in the field, formed the first AI lobbying group, called the Frontier Model Forum. Tuesday’s effort would go a step further by soliciting input from a government agency from the outset.

Tech companies say they’re open to AI regulation

Washington’s desire to regulate AI early in its development is a partial reversal of a lax regulatory policy toward big tech firms, in particular social media companies. The lack of early scrutiny of tech giants including Facebook and Instagram has resulted in a raft of unintended side effects, such as a loss of privacy, the spread of misinformation, and allegations that foreign actors use the technology to interfere in U.S. elections.

Leaders of AI companies have themselves called for regulation: Testifying before Congress this spring, OpenAI CEO Sam Altman called for the government to regulate AI, a sentiment Alphabet CEO Sundar Pichai echoed in an op-ed in the Financial Times, writing that AI was “too important not to regulate, and too important not to regulate well.”