
Google promises its AI won’t be used for a Terminator (or any other weapons)

Sundar Pichai has outlined Google’s AI development principles amid increasing scrutiny of the company’s work in the sector. In a post on the Google blog, the firm’s CEO revealed a detailed list of dos and don’ts, which will guide its development of AI applications moving forward.

Pichai outlined seven principles described as ‘concrete standards’ that will ‘actively govern’ Google’s work in AI R&D, as well as avenues the company will not pursue. He promised that Google’s AI will not be used to create weapons or technologies likely to cause overall harm, and pledged that AI work at Google will not be used for undue surveillance or to contravene international law or human rights.

The mission statement comes after heavy criticism of Google’s participation in an AI program for the Pentagon in the United States, which prompted the resignation of multiple employees. Google will not renew its involvement in the Project Maven initiative, which seeks to use machine learning to improve the accuracy of drone strikes, among other things.

Moving forward, the seven principles are as follows:

1. Be socially beneficial – Google says AI has the potential to benefit healthcare, security, energy, transportation, manufacturing, and entertainment industries.

2. Avoid creating or reinforcing unfair bias – “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” Pichai writes.

3. Be built and tested for safety – Google says it is designing AI to be “appropriately cautious”, while it will test AI tech in contained environments.

4. Be accountable to people – Pichai says all “AI technologies will be subject to appropriate human direction and control.”

5. Incorporate privacy design principles – “We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data,” the Google boss says.

6. Uphold high standards of scientific excellence – Google pledges to “promote thoughtful leadership” in this arena as it seeks to unlock new realms of scientific research via AI.

7. Be made available for uses that accord with these principles – “We will work to limit potentially harmful or abusive applications,” Google claims.

“How AI is developed and used will have a significant impact on society for many years to come,” the Google boss writes. “As a leader in AI, we feel a deep responsibility to get this right.”

Do you trust Google to ‘get this right’? Drop us a line @TrustedReviews on Twitter.

Why trust our journalism?

Founded in 2004, Trusted Reviews exists to give our readers thorough, unbiased and independent advice on what to buy.

Today, we have millions of users a month from around the world, and assess more than 1,000 products a year.


Editorial independence

Editorial independence means being able to give an unbiased verdict about a product or company, while avoiding conflicts of interest. To ensure this is possible, every member of the editorial staff follows a clear code of conduct.


Professional conduct

We also expect our journalists to follow clear ethical standards in their work. Our staff members must strive for honesty and accuracy in everything they do. We follow the IPSO Editors’ code of practice to underpin these standards.