Researchers at the Massachusetts Institute of Technology (MIT) Media Lab have demonstrated that real-world biases can be unwittingly embedded in the artificial intelligence (AI) underlying facial-recognition systems.
"We need to tackle biased perceptions amongst teachers, employers, peers, and parents of the suitability of girls and young women to learn science--or learn at all--to pursue scientific careers or to lead and manage in academic spheres," the officials contended.

University of Canterbury in New Zealand professor Simon Brown is developing a neuromorphic computer chip that employs a network of nanoparticles to work like the human brain in hardware.

The MIT Media Lab's Joy Buolamwini is working with IEEE to establish a group to create accountability and transparency standards in facial analysis software.

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) conducted a study of how people respond to properties listed on Airbnb, in order to better understand how machines can make more "human" decisions.

"Our chips are made in a completely different way, which bypasses those problems," Brown said.

The European Union-funded Speaker Identification Integrated Project (SIIP) has developed a probabilistic, language-independent identification system that employs a unique Speaker-Identification engine and a Global Info Sharing Mechanism to identify unknown speakers surveilled in lawfully intercepted calls, recorded crime or terror arenas, social media, and any other type of speech source.

SIIP integrates multiple speech-recognition algorithms related to speaker model, gender, age, language, and accent, supporting highly reliable and confident detection while minimizing false positives and false negatives.
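The fusion approach described above can be sketched in a few lines: combine per-attribute match scores into one confidence value, then use two thresholds to trade off false positives against false negatives. This is a minimal illustrative sketch, not SIIP's actual engine; the function names, weights, and thresholds are assumptions.

```python
# Hypothetical sketch of multi-attribute score fusion for speaker
# identification. All names, weights, and thresholds are illustrative
# assumptions, not SIIP's actual API or parameters.

def fuse_scores(scores, weights=None):
    """Combine per-attribute match scores (0..1) into one confidence.

    scores: dict mapping attribute -> score, e.g.
        {"speaker_model": 0.91, "gender": 0.99, "language": 0.85}
    weights: optional dict with the same keys; defaults to equal weights.
    """
    if weights is None:
        weights = {k: 1.0 for k in scores}
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

def classify(scores, accept=0.8, reject=0.4):
    """Three-way decision: the gap between the two thresholds lets the
    system abstain rather than emit a false positive or false negative."""
    fused = fuse_scores(scores)
    if fused >= accept:
        return "match"
    if fused <= reject:
        return "no-match"
    return "inconclusive"
```

Requiring high fused confidence before declaring a "match" suppresses false positives, while the separate rejection threshold keeps borderline cases from being discarded outright, which suppresses false negatives.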

The team calculated that one of the most widely used facial-recognition datasets was more than 75 percent male and more than 80 percent white, emphasizing issues of fairness and accountability in AI at a time when investment in and adoption of the technology is moving quickly forward.
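The kind of demographic audit the team performed amounts to tallying each subgroup's share of a labeled dataset. A minimal sketch follows; the field names and sample records are illustrative assumptions, not the study's actual data.

```python
# Hypothetical sketch of a demographic dataset audit: compute each
# label's share of a labeled face dataset. Records are illustrative.
from collections import Counter

def composition(records, field):
    """Return each label's proportion of the dataset for the given field."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Toy dataset standing in for a real benchmark's metadata.
faces = [
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "lighter"},
    {"gender": "male", "skin": "darker"},
    {"gender": "female", "skin": "lighter"},
]

print(composition(faces, "gender"))  # {'male': 0.75, 'female': 0.25}
```

Running such a tally over a benchmark's metadata is how skews like "more than 75 percent male" are surfaced before a model is ever trained on the data.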

"This is the right time to be addressing how these AI systems work and where they fail--to make them socially accountable," says University of Utah professor Suresh Venkatasubramanian.

U.N. Secretary-General António Guterres called for "concerted, concrete efforts" to debunk stereotypes and biases, such as media representations of researchers and innovators as being primarily male.