| New Delhi |
Published: January 21, 2020 5:21:47 pm
Sundar Pichai’s support for the European Union’s temporary ban on the use of facial recognition has once again highlighted how the tech sector in general and the tech giants in particular have different views on how this new, but controversial, technology should be used.
At a conference in Brussels, the Alphabet CEO spoke of the possibility that facial recognition could be used for nefarious purposes. “I think it is important that governments and regulations tackle it sooner rather than later and give a framework for it,” said Pichai, who took over as Alphabet CEO in December 2019 while continuing to head Google. “It can be immediate, but maybe there’s a waiting period before we really think about how it’s being used,” he added.
Pichai’s statement adds to the growing debate around the use of facial recognition, especially by law enforcement agencies across the world. And he is not the only one demanding more regulation.
What is the apprehension around facial recognition tech?
Researchers have pointed out concerns over the accuracy of these systems, especially when it comes to identifying different genders, darker skin tones and racial features. Facial recognition relies on artificial intelligence and machine learning to match a user’s face against a given database. The accuracy of these systems will be crucial going forward, given that their use is set to become more widespread across the world, including in India, the UK and the US.
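At a high level, such systems map each face image to a numeric “embedding” vector and match it against enrolled faces by similarity. The sketch below is a minimal, hypothetical illustration of that matching step, not any vendor’s actual pipeline; the embeddings are stand-ins for the output of a trained neural network.

```python
# Illustrative sketch: identification by embedding similarity.
# The vectors below are hypothetical; real systems use 128- to
# 512-dimensional embeddings produced by a neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict, threshold: float = 0.9):
    """Return the best-matching identity above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled faces
db = {"alice": np.array([0.9, 0.1, 0.2]), "bob": np.array([0.1, 0.9, 0.3])}
print(identify(np.array([0.88, 0.12, 0.21]), db))  # → alice
```

The accuracy concerns researchers raise map directly onto this sketch: if the embedding network was trained on unrepresentative data, similarity scores for under-represented groups become unreliable.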
There are privacy issues as well regarding the use of such technologies, and concerns around governments using this technology to carry out mass surveillance, especially against minority groups. In China for instance, the New York Times reported that the government was using sophisticated facial recognition tools to track and carry out mass surveillance on Uighurs, the Muslim minority in the country.
The issue of how consent is obtained for such facial recognition tools, especially when there is no clear regulation in most parts of the world, cannot be ignored either. When such technology is used in the public domain, especially to collect and build databases, questions around how user consent was obtained often remain unanswered.
For example, police in South Wales, UK, used facial recognition to monitor football fans arriving for a match between Cardiff and Swansea on January 12, 2020, causing considerable anger among fans who alleged it violated their rights.
In India, the anti-CAA protests have shown several instances of police recording footage of protesters across various parts of the country, raising questions on how this data is being collected, stored and used. For example in Chennai, a video went viral of a policeman recording footage of women near a bus stop on a mobile phone.
The incident raised fears that the police were using the Facetagr app, facial recognition software used by the Tamil Nadu Police. The women seen being photographed in the viral video were not even at the protest, and the police had not taken their consent to record them.
Given that big technology companies will drive this technology, here’s a look at their stances on the use of facial recognition.
Google and facial recognition
Pichai’s remarks are not the first time Google has shown a temperate approach towards the use of facial recognition. In a 2018 blog post focused on the use of artificial intelligence for good, Kent Walker, SVP of Global Affairs at Google, wrote that technology like facial recognition still needs careful consideration to ensure its use avoids “abuse and other harmful outcomes”.
“Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions,” the post said. But, interestingly, Google does use facial recognition across products like Google Photos, which is offered to all users. Google Cloud competes with Amazon Web Services and Microsoft Azure, both of which offer facial recognition tools as part of their services.
Amazon and Rekognition
One of the biggest names in the facial recognition segment is Amazon, which offers its ‘Rekognition’ tool as part of its AWS services to law enforcement agencies as well as regular clients. It has defended the technology against criticism, claiming it has several good use cases, including prevention of human trafficking and the ability to find missing children. Last year, Amazon revealed that Rekognition can now detect ‘fear’ and claimed it was improving accuracy in gender classification and age range estimation.
Any regulation would impact Amazon the most. This is also why, last September, Amazon CEO Jeff Bezos revealed that the company was drafting its own proposed facial recognition regulations, which it would share with US lawmakers. “It makes a lot of sense to regulate that,” Bezos was quoted as saying.
Amazon also published five guidelines for the usage of this technology, the first of which states that the tools “should always be used in accordance with the law, including laws that protect civil rights,” and acknowledged that in some cases the law “may restrict” the use of facial recognition.
The principles also add that “human review” is needed when these tools are being used “for law enforcement purposes,” to ensure that there is no violation of human rights. It also recommends a 99 per cent confidence score threshold when using this for law enforcement.
Further, it wants law enforcement agencies to be transparent in how they use the technology and that there should be a “notice when video surveillance and facial recognition technology are used together in public or commercial settings”.
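The 99 per cent confidence threshold Amazon recommends can be sketched as a simple filtering step over match results before any human review. The response structure below loosely mirrors a Rekognition face-search result, but it is a mock, not an actual AWS call, and the `ExternalImageId` values are invented.

```python
# Sketch of applying a 99 per cent confidence threshold to face matches,
# as Amazon recommends for law enforcement use. Mock data, no AWS call.
def filter_matches(face_matches, threshold=99.0):
    """Keep only matches at or above the confidence threshold."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

mock_response = [
    {"Face": {"ExternalImageId": "case-1041"}, "Similarity": 99.6},
    {"Face": {"ExternalImageId": "case-2210"}, "Similarity": 87.2},
]
print(filter_matches(mock_response))  # only the 99.6 per cent match survives
```

The point of the high threshold is to reduce false positives of the kind the ACLU test surfaced; matches below it are discarded rather than acted on.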
In March 2019, several researchers wrote an open letter questioning the accuracy of these tools. In the letter, the researchers claimed there were no laws to audit the use of this technology, and that Amazon had not disclosed its customers or revealed “error rates” across different demographics, which remains a concern.
Amazon, on its part, has disagreed with earlier research, claiming the researchers used “an outdated version” of its facial recognition tool. In 2018, the American Civil Liberties Union (ACLU) showed that Amazon’s facial recognition system had falsely matched the faces of 28 members of Congress against police mugshots, which again raised questions about its accuracy.
Microsoft and Facial Recognition
Microsoft also offers facial recognition as part of its Azure cloud services. At the same conference, Brad Smith, the company’s President and Chief Legal Officer, gave the example of NGOs using facial recognition to find missing children to argue that the tech can be used for good too. “I’m really reluctant to say let’s stop people from using technology in a way that will reunite families when it can help them do it,” Smith said.
“The second thing I would say is you don’t ban it if you actually believe there is a reasonable alternative that will enable us to, say, address this problem with a scalpel instead of a meat cleaver.”
Microsoft, however, had come under criticism when it posted that it was ‘proud’ to offer Azure cloud services, which included “facial recognition and identification” to the US Immigration and Customs Enforcement (ICE). It had later said it did not support the use of this technology to separate children from parents.
Like Amazon, Microsoft has also tried to outline some principles it believes should be kept in mind when using facial recognition. In 2018, Smith had written a detailed blog post addressing concerns around facial recognition and the need for regulation.
“Unless we act, we risk waking up five years from now to find that facial recognition services have spread in ways that exacerbate societal issues. By that time, these challenges will be much more difficult to bottle back up,” he had written.
The post also said that Microsoft and “other tech companies need to start creating safeguards to address facial recognition technology,” and that it should be developed and used carefully, acknowledging fears of violation of privacy and mass surveillance by the government.
Similar to Amazon, Microsoft also announced six principles regarding the use of facial recognition, arguing that it was necessary to address concerns around bias as well.
The principles are:
- Fairness. We will work to develop and deploy facial recognition technology in a manner that strives to treat all people fairly.
- Transparency. We will document and clearly communicate the capabilities and limitations of facial recognition technology.
- Accountability. We will encourage and help our customers to deploy facial recognition technology in a manner that ensures an appropriate level of human control for uses that may affect people in consequential ways.
- Non-discrimination. We will prohibit in our terms of service the use of facial recognition technology to engage in unlawful discrimination.
- Notice and consent. We will encourage private sector customers to provide notice and secure consent for the deployment of facial recognition technology.
- Lawful surveillance. We will advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios, and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.
Apple and Face ID
One area where Apple uses facial recognition is its Face ID feature, which is part of newer iPhones and iPads. Face ID recognises the user to unlock the phone or tablet, and relies on a combination of dedicated hardware and software to ensure high accuracy.
A crucial aspect of Apple’s Face ID is that all related data is stored on the device itself and never sent to the cloud, in line with Apple’s philosophy that user privacy remains paramount. When Apple first introduced the technology, questions were raised around its security and privacy, and the company later put out a detailed support page explaining how it actually works.
The support page explains that Face ID data, which includes “mathematical representations of your face”, is “encrypted and protected by the Secure Enclave” of the device. The data is updated over time, using neural networks, so that Face ID can keep up with natural changes in the user’s appearance.
The crucial bit is that this Face ID “data does not leave the device”. The only time such data is sent to Apple is when the user chooses to share Face ID diagnostic data with AppleCare, and even that can be reviewed by the user first.
Apple’s Face ID also relies on complex neural networks to recognise changes to a user’s appearance and remain accurate over time. “We trained this network with over 2 billion images including infrared and depth images collected in extensive studies. And of course, they were all done with users’ informed consent,” Kaiann Drance, Apple’s VP for product marketing, had told indianexpress.com earlier.
Apple said the images came from across the world to ensure diversity in gender, age, ethnicity and other factors, representing a wide range of people, and that this is an ongoing study. The company also says that, thanks to the neural networks, Face ID cannot be spoofed even with 3D masks or simple pictures of the user, as has been observed with face unlocking on some other phones.
Interestingly, Apple has faced questions over the use of facial recognition in its stores after a New York teen sued the company for $1 billion. Ousmane Bah claimed he was wrongly arrested because an Apple Store identified his face in a theft that he did not commit. Apple, however, denied the use of facial recognition in its stores.
Facebook and facial recognition
In April 2018, Facebook CEO Mark Zuckerberg told a joint meeting of the US Senate Commerce and Judiciary committees that companies should get “affirmative consent” when using technology such as facial recognition. But Facebook’s own track record on user privacy has not inspired much confidence, given the recent scandals that have come to light.
Facebook has offered facial recognition tools for some time now, though in 2019 it rolled out the feature to all users. In 2017, Facebook had announced it would be testing the feature for the purposes of ‘account safety’ and verification.
The feature analyses photos and videos where Facebook thinks the user might be present and creates a “unique number” called a template. If the face recognition setting is turned on, this template is compared against other photos, videos and other places where the camera is used (such as live video) to recognise whether the user appears in that content.
Facebook says it does not share this ‘template’ with anyone. The face recognition setting is turned off by default, though users can turn it on if they prefer. The company also claims the feature helps protect users and others from “impersonation and identity misuse, and improve platform reliability”. The setting is available only to users over 18.
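Unlike the identification scenario (“who is in this photo?”), a stored template is often used for verification: is this new face the same person? Facebook describes the template as a number derived from the user’s face; the sketch below models it as a vector and uses a distance check. All values and the threshold are hypothetical, for illustration only.

```python
# Hedged sketch of template-based verification, not Facebook's actual
# algorithm: compare a stored face template to a new photo's embedding.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(template, new_embedding, max_distance=0.6):
    """Verification: does the new photo match the stored template?"""
    return distance(template, new_embedding) <= max_distance

stored_template = [0.42, 0.91, 0.13]  # hypothetical enrolled template
print(same_person(stored_template, [0.40, 0.93, 0.15]))  # True: small distance
print(same_person(stored_template, [0.95, 0.10, 0.80]))  # False: too far
```

This framing also shows why not sharing the template matters: whoever holds it can test any future photo against it.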
© IE Online Media Services Pvt Ltd