Artificial intelligence (A.I.) is familiar territory for Eric Schmidt, the former CEO of Google. Over the years, he has invested in a range of A.I. startups, including Stability AI, Inflection AI, and Mistral AI. Now, however, Schmidt is shifting gears, launching a $10 million initiative aimed at the safety problems that accompany this fast-moving technology.
The funding will create an A.I. safety science program under Schmidt Sciences, a nonprofit organization he co-founded with his wife, Wendy. This initiative, which will be spearheaded by Michael Belinsky, intends to emphasize the scientific study of A.I. safety, moving beyond merely assessing the risks. “Our goal is to conduct academic research that explains why certain elements may be inherently unsafe,” Belinsky shared.
As part of the initiative, more than two dozen researchers have already been selected to receive grants of up to $500,000 each. Beyond the funding, grantees will gain access to computational resources and A.I. models. The program is designed to keep pace with rapid industry advances. “We’re focused on addressing the challenges posed by contemporary A.I. systems, rather than outdated ones like GPT-2,” Belinsky pointed out.
Among the initial recipients of these grants are esteemed researchers such as Yoshua Bengio and Zico Kolter. Bengio’s work will center on developing technologies to mitigate risks in A.I. systems, while Kolter will investigate issues like adversarial transfer. Another grantee, Daniel Kang, plans to research the potential for A.I. agents to carry out cybersecurity attacks, shedding light on the inherent risks tied to A.I. capabilities.
Even amid the buzz surrounding A.I. in Silicon Valley, there are rising concerns that safety issues are being overlooked. The new Schmidt Sciences program aims to close this gap by removing obstacles to A.I. safety research and by encouraging collaboration between academia and industry. Grantees like Kang hope that leading A.I. companies will incorporate findings from safety research into how they develop their technology.
As the A.I. landscape evolves, Kang highlights the crucial need for open dialogue and clear reporting in the testing of A.I. models. He calls for responsible practices from major laboratories to ensure the ethical and safe advancement of A.I. technology.
In short, Schmidt’s $10 million commitment to A.I. safety reflects a bet that dedicated research, rather than risk assessment alone, is the way to address the pressing challenges of this transformative field.