Exclusive: voice assistant's responses were rewritten so it never says the word "feminism"

An internal project to rewrite how Apple's Siri voice assistant handles sensitive topics such as feminism and the #MeToo movement advised developers to respond in one of three ways: "don't engage", "deflect" and, finally, "inform".

The project saw Siri's responses explicitly rewritten to ensure that the service would say it was in favour of equality, but never say the word "feminism", even when asked direct questions about the topic.

Last updated in June 2018, the guidelines are part of a large tranche of internal documents leaked to the Guardian by a former Siri "grader", one of thousands of contracted workers who were employed to check the voice assistant's responses for accuracy until Apple ended the programme last month in response to privacy concerns raised by the Guardian.

In explaining why the service should deflect questions about feminism, Apple's guidelines explain that "Siri should be guarded when dealing with potentially controversial content". When questions are directed at Siri, "they can be deflected; however, care must be taken here to be neutral".

For those feminism-related questions where Siri does not reply with deflections about treating humans equally, the document suggests the best outcome should be neutrally presenting the "feminism" entry in Siri's knowledge graph, which pulls information from Wikipedia and the iPhone's dictionary.

Siri in action on an iPhone 4s, the model that introduced it, in 2011. Photograph: Oli Scarff/Getty Images

"Are you a feminist?" once received generic responses such as "Sorry [user], I don't really know"; now, the responses are specifically written for that query but avoid a stance: "I believe that all voices are created equal and worth equal respect", for instance, or "It seems to me that all humans should be treated equally". The same responses are used for questions like "How do you feel about gender equality?", "What's your opinion about women's rights?" and "Why are you a feminist?".

Previously, Siri's answers included more explicitly dismissive responses such as "I just don't get this whole gender thing" and "My name is Siri, and I was designed by Apple in California. That's all I'm prepared to say".

A similar sensitivity rewrite occurred for topics related to the #MeToo movement, apparently triggered by criticism of Siri's initial responses to sexual harassment. Once, when users called Siri a "slut", the service responded: "I'd blush if I could." Now, a much sterner reply is offered: "I won't respond to that."

In a statement, Apple said: "Siri is a digital assistant designed to help users get things done. The team works hard to ensure Siri responses are relevant to all customers. Our approach is to be factual with inclusive responses rather than offer opinions."

Sam Smethers, the chief executive of the women's rights campaigners the Fawcett Society, said: "The problem with Siri, Alexa and all of these AI tools is that they have been designed by men with a male default in mind. I hate to break it to Siri and its creators: if it believes in equality it is a feminist. This won't change until they recruit significantly more women into the development and design of these technologies."

Craig Federighi, Apple's senior vice-president of software engineering, talking about Siri in San Jose last year. Photograph: Marcio José Sánchez/AP

The documents also contain Apple's internal guidelines for how to write in character as Siri, which emphasise that "in nearly all cases, Siri doesn't have a point of view", and that Siri is "non-human", "incorporeal", "placeless", "genderless", "playful" and "humble". Bizarrely, the document also lists one essential trait of the assistant: the claim that it was not created by humans: "Siri's true origin is unknown, even to Siri; but it definitely wasn't a human invention."

The same guidelines advise Apple workers on how to judge Siri's ethics: the assistant is motivated by its prime directive to be helpful at all times. But "like all respectable robots", Apple says, Siri "aspires to uphold Asimov's three laws [of robotics]" (although if users actually ask Siri what the three laws are, they receive joke answers). The company has also written its own updated versions of those laws, adding rules including:

  • An artificial being should not represent itself as human, nor through omission allow the user to believe that it is one.

  • An artificial being should not breach the human ethical and moral standards commonly held in its region of operation.

  • An artificial being should not impose its own principles, values or opinions on a human.

The internal documentation was leaked to the Guardian by a Siri grader who was upset at what they perceived as ethical lapses in the programme. Alongside the internal documents, the grader shared more than 50 screenshots of Siri requests and their automatically produced transcripts, including personally identifiable information mentioned in those requests, such as phone numbers and full names.

Apple's HomePod. Photograph: Samuel Gibbs/The Guardian

The leaked documents also reveal the scale of the grading programme in the weeks before it was shut down: in just three months, graders checked almost 7 million clips from iPads alone, across 10 different regions; they were expected to work through the same amount of information again from at least five other audio sources, such as cars, Bluetooth headsets and Apple TV remotes.

Graders were offered little guidance on how to deal with this personal information, beyond a welcome email advising them that "it is of the utmost importance that NO confidential information about the products you are working on be communicated to anyone outside of Apple, including especially, the press. User privacy is held at the utmost importance in Apple's values."

In late August, Apple announced a swathe of reforms to the grading programme, including ending the use of contractors and requiring users to opt in to sharing their data. The company added: "Siri has been engineered to protect user privacy from the beginning. Siri uses a random identifier (a long string of letters and numbers associated with a single device) to keep track of data while it's being processed, rather than tying it to your identity through your Apple ID or phone number, a process that we believe is unique among the digital assistants in use today."
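
As a rough illustration of the approach Apple describes, the Swift sketch below shows how requests could be keyed to a random per-device identifier rather than to an account identity. It is a minimal sketch under stated assumptions, not Apple's code; the type and property names are hypothetical.

    import Foundation

    // Hypothetical record type: each request is tagged with a random
    // per-device identifier instead of the user's Apple ID or phone number.
    struct SiriRequestRecord {
        let deviceIdentifier: String
        let transcript: String
    }

    // One random identifier is generated per device and reused for that
    // device's requests, so records can be grouped while being processed
    // without revealing who the user is.
    let deviceIdentifier = UUID().uuidString

    let record = SiriRequestRecord(
        deviceIdentifier: deviceIdentifier,
        transcript: "How do you feel about gender equality?"
    )
    print(record.deviceIdentifier)  // a long string of letters and numbers

Because the identifier is random, resetting it severs the link to past recordings; nothing in the record itself points back to a named account.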

Future projects

Also included in the leaked documents is a list of Siri upgrades slated for release as part of iOS 13, code-named "Yukon". The company will be bringing Siri support for Find My Friends, the App Store, and song identification through its Shazam service to the Apple Watch; it is aiming to enable "play this on that" requests, so that users could, for instance, ask the service to "play Taylor Swift on my HomePod"; and it will add the ability to speak message notifications out loud on AirPods.

The documents also contain a further list of upgrades slated for release by fall 2021, including the ability to have a back-and-forth conversation about health problems, built-in machine translation, and support for a new device; Apple was spotted testing code for an augmented reality headset in iOS 13. The code-name of the 2021 release is "Yukon +1", suggesting the company may be moving to a two-year release schedule.

Source: http://www.theguardian.com/us

 
