Ethical concerns related to the use of voice technologies
The ethical use of voice technologies, such as speech and voice recognition, is becoming more important every day. Devices such as smart speakers, smartphones and smartwatches collect massive amounts of data from users thanks to the wide range of activities they support (e.g., asking questions, setting reminders, checking bank accounts, accessing calendars, etc.). This data, as you might imagine, is often personal or private in nature. Companies offering services through these devices must now ensure that users' data is processed not only legally but also ethically.
The above issue is not the only ethical concern. The poor understanding average users have of how voice technologies work (e.g., what data they record, how their behaviour is audited, to what extent their responses are explainable, etc.) also raises ethical questions, as some users (e.g., the elderly) tend to anthropomorphise voice assistants and reveal more details about their lives than necessary, without being aware of the consequences this may have.
We briefly describe below some of the main scenarios related to the use of voice technologies that could carry ethical implications.
- Data ownership: In 2015, the American authorities requested access to all recordings from an Amazon Echo device to investigate the murder of a citizen named Victor Collins. The question is: who is the actual owner of the data recorded by the device? [Ref1]
- Societal biases: Machine learning applications like those present in smart speakers and similar devices can learn societal biases from their training data. For instance, the study “Biased bots: Human prejudices sneak into AI systems” showed that in typical training data used for machine learning, African American names often appear alongside unpleasant words (e.g., “hatred”, “poverty”, “ugly”), while European American names are often paired with words such as “love”, “lucky” or “happy”. [Ref2]
- Anthropomorphisation of voice technologies: As mentioned above, the intention of tech companies to give voice assistants human-like personalities leads users to anthropomorphise them and, thus, share an excessive amount of sensitive information. This phenomenon, which ascribes human attributes to machines, increases the risk of eroding human self-determination and leads users to overestimate the system’s capabilities. [Ref3]
- Niche in sensitive fields: AI-based technologies like voice and speech recognition have found a niche in fields rich in personal data. The best example is probably personal health, where information such as the patient’s history of medical diagnoses, diseases and interventions, medication prescriptions, test results, behavioural patterns, and sexual life, to mention a few, is recorded on a daily basis for a variety of purposes. For instance, some ICT tools for medical practitioners allow for the development of decision support systems that improve the individual capacity of medical professionals. The main issue with this type of system — at least in the medical field — is that decision-making becomes a spatially distributed process, where multiple actors (e.g., medical specialists, nurses, pharmacologists, etc.) converge and thus gain access to the data. [Ref4]
- Absence of human intervention: Algorithms are sometimes used for delicate tasks such as determining how much an individual should pay for insurance or filtering candidates applying for a position. Although these tasks may be performed more efficiently with the help of AI, they still require strict human supervision. Unfortunately, some service providers skip this requirement. For instance, several speech recognition solutions on the market claim to successfully identify fraudulent call centre conversations and even criminals pretending to be customers. Without enough human intervention to verify that flagged callers are indeed criminals or scammers, these systems could end up wrongly labelling legitimate users as such.
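The human supervision described above can be pictured with a small sketch. The snippet below is purely illustrative, assuming a fraud-detection model that outputs a confidence score between 0 and 1; the function name and thresholds are hypothetical, not drawn from any real product.

```python
# Hypothetical human-in-the-loop routing for automated fraud flags.
# The threshold and names are illustrative assumptions, not a real product's API.

REVIEW_THRESHOLD = 0.7  # below this, the model's suspicion is too weak to act on

def route_flag(score: float) -> str:
    """Route a model-generated fraud score to the next step.

    No score, however high, blocks a caller automatically: every
    actionable flag is sent to a human analyst for verification first.
    """
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # an analyst confirms before any action is taken
    return "no_action"

# Even a near-certain flag goes to a human, never straight to a block.
print(route_flag(0.99))  # human_review
print(route_flag(0.10))  # no_action
```

The design choice here is that the system never maps a score directly to a punitive action; the highest-confidence path still ends at a human reviewer.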
Initiatives to regulate ethics in voice technologies
Fortunately, initiatives aimed at regulating ethics in AI-based technologies (e.g., speech and voice recognition) exist at the European level. This is the case of the European Commission’s Ethics Guidelines for Trustworthy AI. According to these guidelines, for AI to be considered trustworthy, it must comply with the following principles [Ref4]:
- Human action and supervision: AI should empower humans and enable them to make informed decisions while ensuring proper oversight mechanisms. This goal can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
- Technical security and robustness: AI systems should be resilient and secure to ensure the prevention or minimisation of unintentional harm.
- Data and privacy management: Privacy, data protection, and adequate data management should be ensured, considering the quality and integrity of the data.
- Transparency: AI systems and their decision-making mechanisms should be explainable to the users concerned (rather than black boxes), keeping in mind their capabilities and limitations.
- Diversity and non-discrimination: AI systems should be accessible to everyone and avoid biases of any kind.
- Social and environmental wellbeing: AI systems should benefit all human beings, be environmentally friendly, and consider their social impact.
- Accountability: The responsibility and accountability of AI systems and their outcomes should be ensured through adequate mechanisms.
In the case of voice technologies, the principles above can be fulfilled, for instance, by designing the technologies behind voice assistants (Speech-to-Text, Natural Language Understanding, etc.) to preserve user privacy, to enable interactions in several languages and dialects (as is the case of COMPRISE), and to avoid making decisions that may affect the user.
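One way to picture privacy-preserving design is to transform transcripts on the device before they leave it. The snippet below is a toy sketch, not COMPRISE's actual pipeline: it uses two simple regular expressions where a real system would use trained named-entity models, and all names and patterns are illustrative assumptions.

```python
import re

# Toy sketch: replace obvious personal identifiers in a transcript with
# placeholder tags before the text is sent off-device. A real privacy-driven
# pipeline would rely on trained models; these patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(transcript: str) -> str:
    """Return the transcript with matched identifiers replaced by tags."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"<{label}>", transcript)
    return transcript

print(redact("Call me at +33 6 12 34 56 78 or write to ana@example.com"))
# Call me at <PHONE> or write to <EMAIL>
```

The point of the sketch is architectural: the redaction step runs locally, so the service provider only ever receives the tagged version of what the user said.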
Within this context, there are other European or national initiatives that aim to regulate and study the ethical use of various technologies. These include the European Observatory on Society and Artificial Intelligence, a project created under the H2020 programme that offers tools to help people better understand the impact AI technologies have across the EU, and the French National Pilot Committee for Digital Ethics (CNPEN), which issued a consultation on “Ethical issues of conversational agents” in 2020.
There is great uncertainty as to what should be considered an ethical use of technologies, although some points are clear. No technology should have negative consequences for its users, whether in terms of their privacy, the security of their data, or how they perceive themselves as individuals.
To a large extent, the task of achieving ethical use of voice technologies rests with the developers in charge of designing, training and integrating the models that enable voice devices to function. Of course, it is up to multiple bodies (e.g., the European Commission) to work on standards, guidelines, and good practices that set the minimum requirements developers and other stakeholders should meet to be considered as making ethical use of voice technologies.
[Ref1] Social and Ethical Concerns of Smart Voice-Enabled Wireless Speakers
[Ref2] Biased bots: Human prejudices sneak into AI systems
[Ref3] Analysing ethical challenges of digital advertising for the Amazon Echo voice assistant
[Ref4] Ethics Guidelines for Trustworthy AI
Legal specialist and Project Manager at Rooter
Legal Consultant at Rooter