Oracle, Google, Microsoft, and Amazon are archenemies in the competitive cloud computing market. But in late 2018, top executives from the four companies, including future Amazon CEO Andy Jassy, teamed up on an unpaid side gig: advising the president and US Congress on how artificial intelligence can bolster national security.
The executives were named to the National Security Commission on AI, created by Congress. Its chair was Eric Schmidt, previously CEO of Google, who later said it would help the US “harness this transformative technology to benefit both our economic and national security interests.”
Schmidt, Jassy, and the other commission members from Big Tech also had an economic interest in the topic. Their companies compete for Pentagon contracts, like the $10 billion JEDI project that is now being reworked after a lawsuit from Amazon. Schmidt sat on the board of Google parent Alphabet until 2019 and has since invested in Pentagon contractor Rebellion Defense.
The NSCAI completed its three-year mission and shut down on October 1. But fans of the body say—and critics fear—its legacy will live on. Both point to how the group’s recommendations, some of which steer the Pentagon to work more closely with the tech industry, have already been written into law. The US has few laws specifically concerned with AI, and the commission shaped a significant chunk of those on the books.
NSCAI says 19 of its recommendations to Congress were included in the defense budget approved in December 2020. One directs the Pentagon to use an existing industry exchange program to bring in more AI talent from tech companies. Another elevated the director of the Pentagon’s Joint AI Center—which aims to expand military use of AI by tapping commercial AI providers, including Google—to report directly to the deputy secretary of defense.
Other recommendations from the group include having the Pentagon create an internal platform for current and future AI projects that draws on “computing and storage services from a pool of vetted cloud companies.” Another calls for a department-wide push to use “commercial AI solutions” to automate its many administrative processes.
The commission also suggested new investment in training AI experts, research inside and outside the Pentagon, and support for US semiconductor development. The group’s overall message? Artificial intelligence is central to the country’s destiny and safety—and to competing with China’s plans to dominate in commercial and military AI.
“The commission put forward a very concrete, strategic plan for US technology policy,” says Martijn Rasser, director of the technology and national security program at think tank the Center for a New American Security. “It’s fantastic to see so many of the ideas put forward making it into legislation.”
Meredith Whittaker, faculty director of the AI Now Institute at New York University, has a different view of the commission’s work. “What I saw was an extraordinarily conflicted quasi-government body writing policy and legislation from the sidelines that potentially pulls in hundreds of millions of dollars for big tech,” she says. Whittaker previously worked at Google, where she helped organize protests against a Pentagon project that used the company’s technology to analyze drone surveillance footage.
The NSCAI was created by a 2018 law, with a charter that provided an estimated annual budget of $5 million and staff of 26 people. Members of Congress nominated 12 commissioners, the Pentagon two, and the Department of Commerce one.
Beyond Schmidt and Jassy, members included Andrew Moore, Google’s head of cloud AI; Safra Catz, co-CEO of Oracle; Eric Horvitz, Microsoft’s director of research; Robert Work, a former deputy secretary of defense who helped start the Pentagon’s recent pivot toward AI; and former Democratic FCC commissioner Mignon Clyburn.
The panel started work in 2019 and issued a series of interim reports and recommendations before delivering its final 756-page opus in March. It came with predrafted legislation that lawmakers could copy and paste into law, along with draft executive orders for the White House.
Commissioners also appeared at congressional hearings, including one dedicated to the group’s recommendations. At a February hearing of the House Armed Services committee, Schmidt warned that “the threat of Chinese leadership in key technology areas is a national crisis and needs to be dealt with directly, now.”
Ylli Bajraktari, who served as NSCAI’s executive director, says Congress’s action on the commission’s recommendations indicates the group did its job. “I think leaders in Congress understand we’re lacking in this important technology that’s going to dominate our lives,” he says. “We enjoyed bipartisan support.”
Asked if the group was too tech-industry-centric, Bajraktari points out that most of the 15 commissioners were not from the tech industry and were appointed by lawmakers and government agencies. The group consulted “hundreds of private sector companies and academics, as well as international allies and partners” before drawing up recommendations, he says.
When WIRED asked technology companies if their involvement in the commission created conflicts of interest, their responses largely ignored the question. Oracle did not respond to a request for comment.
Moore, Google’s head of cloud AI, said he was honored to serve on the commission and that he hoped it and other projects would “strengthen American AI leadership and grow a more robust AI workforce.” Amazon referred WIRED to Jassy’s comments at a March public meeting of the group, where he talked about the need for “meaningful urgency” on the issues it had highlighted. Microsoft’s Horvitz, who led the commission’s work on “Trustworthy and Ethical AI,” said in a statement that he “found all of the commissioners, no matter their affiliation, to be deeply committed to the mission: the national security and prosperity of the United States.” A spokesperson for Schmidt said he had been appointed to the commission because of his technology expertise and had filed the required ethics paperwork, which was reviewed by Pentagon lawyers.
The commission’s final report argues that infusing AI systems with “American values” is part of the global competition over the technology. “The more our commissioners thought about it, the more it became clear that the one thing that makes us different from China is how we use these technologies,” Bajraktari says.
Some of the recommendations are under consideration by Congress for inclusion in the next defense budget. One would require national security agencies and armed services branches to have a member of senior leadership working full time on “responsible AI.” Another would require formal assessments of risks to privacy and civil liberties for any AI system involving US persons.
Ben Winters, a lawyer who works on AI issues at the Electronic Privacy Information Center, supports some of those suggestions, but he says that overall the commission’s recommendations lean heavily toward deploying, rather than constraining, AI.
The result resembles some AI ethics suggestions from the tech industry, he says, which lack sufficient bite to meet the scale of the challenges posed by the technology. “The tenor of the recommendations largely is ‘We need to keep pushing on AI adoption so we don’t lose to China,’” Winters says. “They failed to recommend comprehensive privacy legislation or any concrete rights for people impacted by harmful AI.” EPIC won a lawsuit against the commission that forced the disclosure of many documents, including commissioners’ ethics forms, but details of the disclosures were redacted.
Four days after NSCAI expired, Schmidt announced a new, private organization called the Special Competitive Studies Project that bears similarities to the commission but does not have formal government backing. Bajraktari is CEO. Work, the former deputy secretary of defense, is on the board.
The new project is inspired by the Special Studies Project set up in 1956 by Nelson Rockefeller and led by Henry Kissinger to suggest ideas for US national priorities after World War II. That group’s report after the 1957 launch of Sputnik—recommending an urgent military and nuclear buildup—is credited with shaping US strategy during the Cold War.
Schmidt’s group discussed its own focus at its first advisory board meeting this week. The group said it will create panels to research the impact of AI and other emerging technologies in areas that include defense, intelligence, economy, and society. In a statement, Schmidt said the group “fills an important gap in the national discourse on these important issues.” He added, “We must get this right to lead in the global technology competition.”