In a recent report, the Human Technology Institute at the University of Technology Sydney (UTS) describes a model law for facial recognition technology that would protect against harmful uses of the technology while encouraging innovation that benefits society.
Australian law was not written with facial recognition technology in mind. The report, led by UTS Industry Professors Edward Santow and Nicholas Davis, recommends reforms to Australian law to keep pace with threats to privacy and other human rights.
In the past few years, facial recognition and other remote biometric technologies have spread rapidly. This has raised concerns about privacy, mass surveillance, and unfairness when the technology makes mistakes, especially for women and people of color.
A consumer advocacy group's investigation in June 2022 revealed that several major Australian retailers were using facial recognition technology to identify customers entering their stores, which alarmed the public and prompted calls for tighter regulation. There have also been widespread calls for facial recognition law reform, both in Australia and abroad.
This latest report answers those calls. It recognizes that our faces are special: humans rely heavily on one another's faces to identify and communicate with each other. Because of that reliance, our human rights are more likely to be violated when facial recognition technology is misused or overused.
“When facial recognition software is well designed and regulated, there can be real benefits, making it easier to identify people efficiently and at scale. The technology is often used by people who are blind or have a vision impairment, making the world more accessible for those groups,” said Professor Santow, the current co-director of the Human Technology Institute and a former Australian Human Rights Commissioner.
The report proposes a risk-based model law for facial recognition. “The starting point should be to ensure that facial recognition technology is developed and used in ways that uphold people’s fundamental human rights,” he said.
“Our current legal system has gaps that have led to a kind of regulatory market failure. Because consumer protection is inadequate, many reputable companies have withdrawn from facial recognition services. The companies that remain in this space are not required to prioritize the fundamental rights of the people who will be affected by this technology,” said Professor Davis, co-director of the Human Technology Institute and a former member of the executive committee at the World Economic Forum in Geneva.
He said that many government and intergovernmental organizations, as well as independent experts, have raised concerns about the risks posed by facial recognition, both now and in the future.
The report calls on Australian Attorney-General Mark Dreyfus to lead a comprehensive reform initiative for facial recognition, starting with the introduction of a bill based on the report’s model law into the Australian Parliament.
The report also suggests that the Office of the Australian Information Commissioner be given regulatory power to control the creation and use of this technology in federal jurisdiction, with a coordinated strategy in state and territorial jurisdictions.
The model law sets out three levels of risk, assessed both for the community as a whole and for the individuals who may be affected by a particular facial recognition application.
Under the model law, anyone who develops or deploys facial recognition technology must first assess the level of human rights risk associated with its use. The public and the regulator can then challenge that assessment.
Based on the risk assessment, the model law then sets out a cumulative list of legal requirements, restrictions, and prohibitions.
The report, Facial Recognition Technology: Toward a Model Law, was authored by Nicholas Davis, Edward Santow, and Lauren Perry of the Human Technology Institute at the University of Technology Sydney.