Under pressure from Congress, Facebook revealed in 2018 that it provided access to key application programming interfaces (APIs) to device-makers based in the People’s Republic of China (PRC), including Huawei, OPPO, TCL, and others. In the wake of those disclosures, Facebook met with the staffs of U.S. Sens. Marco Rubio, R-Fla., and Mark Warner, D-Va., and the Senate Intelligence Committee, to discuss access to this data and what controls Facebook was putting in place to protect user data in the future.
Warner is the chairman of the committee and Rubio is its vice chairman.
The senators sent a letter to Meta CEO Mark Zuckerberg questioning the company about recently released documents revealing that Facebook knew that hundreds of thousands of developers in what it classified as “high-risk jurisdictions,” including the PRC and Russia, had access to user data. The documents were released as part of ongoing litigation against the company related to its lax handling of personal data after revelations regarding Cambridge Analytica. The newly available documents reveal that Facebook internally acknowledged in 2018 that this access could be used for espionage purposes.
The letter is below.
Dear Mr. Zuckerberg:
We write to you regarding recently unsealed documents in connection with pending litigation in which your company, Meta, is engaged. It appears from these documents that Facebook has known, since at least September 2018, that hundreds of thousands of developers in countries Facebook characterized as “high-risk,” including the People’s Republic of China (PRC), had access to significant amounts of sensitive user data. As leaders of the Senate Intelligence Committee, we write today with a number of questions regarding these documents and the extent to which developers in these countries were granted access to American user data.
In 2018, the New York Times revealed that Facebook had provided privileged access to key application programming interfaces (APIs) to Huawei, OPPO, TCL, and other device-makers based in the PRC. Under the terms of agreements with Facebook dating back to at least 2010, these device manufacturers were permitted to access a wealth of information on Facebook’s users, including profile data, user IDs, photos, contact information, and even private messages. In the wake of these revelations, as well as broader revelations concerning Facebook’s lax data security policies related to third-party applications, our staffs held numerous meetings with representatives from your company, including with senior executives, to discuss who had access to this data and what controls Facebook was putting in place to protect user data in the future.
Given those discussions, we were startled to learn recently, as a result of this ongoing litigation and discovery, that Facebook had concluded that a much wider range of foreign-based developers, in addition to the PRC-based device-makers, also had access to this data. According to at least one internal document, this included nearly 90,000 separate developers in the PRC, which is especially remarkable given that Facebook has never been permitted to operate in the PRC. The document also refers to the discovery of more than 42,000 developers in Russia, and thousands of developers in other “high-risk jurisdictions,” including Iran and North Korea, that had access to this user information.
As Facebook’s own internal materials note, those jurisdictions “may be governed by potentially risky data storage and disclosure rules or be more likely to house malicious actors,” including “states known to collect data for intelligence targeting and cyber espionage.” As the Chairman and Vice Chairman of the Senate Select Committee on Intelligence, we have grave concerns about the extent to which this access could have enabled foreign intelligence service activity, ranging from foreign malign influence to targeting and counter-intelligence activity.
In light of these revelations, we request answers to the following questions on the findings of Facebook’s internal investigation:
1) The unsealed document notes that Facebook conducted separate reviews on developers based in the PRC and Russia “given the risk associated with those countries.”
What additional reviews were conducted on these developers?
When was this additional review completed and what were the primary conclusions?
What percentage of the developers located in the PRC and Russia was Facebook able to definitively identify?
What communications, if any, has Facebook had with these developers since its initial identification?
What criteria does Facebook use to evaluate the “risk associated with” operation in the PRC and Russia?
2) For the developers identified as being located within the PRC and Russia, please provide a full list of the types of information to which these developers had access, as well as the timeframes associated with such access.
3) Does Facebook have comprehensive logs on the frequency with which developers from high-risk jurisdictions accessed its APIs and the forms of data accessed?
4) Please provide an estimate of the number of discrete Facebook users in the United States whose data was shared with a developer located in each country identified as a “high-risk jurisdiction” (broken out by country).
5) The internal document indicates that Facebook would establish a framework to identify the “developers and apps determined to be most potentially risky[.]” How did Facebook establish this rubric? How many developers and apps based in the PRC and Russia met this threshold? How many developers and apps in other high-risk jurisdictions met this threshold? What were the specific characteristics of these developers that gave rise to this determination? Did Facebook identify any developers as too risky to safely operate with? If so, which?
6) The internal document references your public commitment to “conduct a full audit of any app with suspicious activity.” How does Facebook characterize “suspicious activity” and how many apps triggered this full audit process?
7) Does Facebook have any indication that any developers’ access enabled coordinated inauthentic activity, targeting activity, or any other malign behavior by foreign governments?
8) Does Facebook have any indication that developers’ access enabled malicious advertising or other fraudulent activity by foreign actors, as revealed in public reporting?
Thank you for your prompt attention.