
09-09-2017, 05:28 AM
New AI can guess whether you're gay or straight from a photograph


"An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions"

"The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.
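The pipeline described here, deep-network features fed into a logistic regression, can be sketched in miniature. In the sketch below the feature vectors are synthetic stand-ins (in the study they came from a deep neural network run over facial images), and the logistic regression is a plain gradient-descent implementation, not the authors' code:

```python
import math
import random

random.seed(0)

# Hypothetical 3-dimensional "features" standing in for the vectors a
# deep network would extract from each facial image. The two classes
# are drawn around different centers so they are learnable.
def make_example(label):
    center = [1.0, -0.5, 0.3] if label == 1 else [-1.0, 0.5, -0.3]
    return [c + random.gauss(0, 0.5) for c in center], label

data = [make_example(i % 2) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain gradient-descent logistic regression on the feature vectors.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1
for _ in range(300):
    for x, y in data:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(x):
    return 1 if sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) >= 0.5 else 0

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(round(accuracy, 2))
```

The design point is that the deep network only produces features; the final classification is an ordinary, interpretable logistic regression over those features.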

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women."

"The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid"

"It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers."

Deep neural networks are more accurate than humans at detecting sexual orientation from facial images


"Description: We show that faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain. We used deep neural networks to extract features from 35,326 facial images. These features were entered into a logistic regression aimed at classifying sexual orientation. Given a single facial image, a classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women. The accuracy of the algorithm increased to 91% and 83%, respectively, given five facial images per person. Facial features employed by the classifier included both fixed (e.g., nose shape) and transient facial features (e.g., grooming style). Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles. Prediction models aimed at gender alone allowed for detecting gay males with 57% accuracy and gay females with 58% accuracy. Those findings advance our understanding of the origins of sexual orientation and the limits of human perception. Additionally, given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women."
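One way to see why accuracy rises with five images: if each image contributed an independent 81%-accurate vote, a majority vote over five images would be right about 95% of the time. Independence is a strong simplifying assumption (images of the same person are surely correlated, and the paper's actual aggregation method may differ), which is consistent with the observed 91% falling below this back-of-envelope bound:

```python
from math import comb

def majority_vote_accuracy(p_single, n_images):
    """Probability that a majority vote over n independent per-image
    classifications is correct, given per-image accuracy p_single."""
    need = n_images // 2 + 1  # votes required for a majority
    return sum(comb(n_images, k) * p_single**k * (1 - p_single)**(n_images - k)
               for k in range(need, n_images + 1))

# Per-image accuracy of 81% (the figure reported for men):
print(round(majority_vote_accuracy(0.81, 5), 3))  # → 0.949
```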

09-09-2017, 04:47 PM
The devil is in the details. Here they check out.

Gay and straight were represented in equal numbers.

Given how lousy a lot of studies can be nowadays, I had to check: if the sample had been a random assortment of people, I too could build an "AI" that is about 90% accurate just by guessing straight for everyone.
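That class-balance point can be made concrete. On a population with, say, a ~93% straight base rate (an illustrative figure, not from the study), a classifier that always answers "straight" is 93% accurate while learning nothing; on the study's balanced 50/50 sample the same trick scores only 50%, so the reported 81%/74% is genuine signal above chance:

```python
# A classifier that ignores its input and always answers "straight"
# is right exactly as often as the straight fraction of the sample.
def always_straight_accuracy(straight_fraction):
    return straight_fraction

print(always_straight_accuracy(0.93))  # 0.93 on an unbalanced population
print(always_straight_accuracy(0.50))  # 0.50 on the study's balanced sample
```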

09-09-2017, 11:24 PM
Is this any more accurate than assessing finger-length ratios?

Any neural-network comparison of this kind happens inside a black box.
That is, you cannot directly know on what basis the assessment itself is being made.

You can estimate what is going on by experimenting with the training material: teach the neural network on different subsets of the training database and see how its assessments change.
That reveals the most important point: the assessment is only as good as the training material and the way in which it is used.

For example, if one ethnic group is absent from the training data and you don't realize this, then evaluating someone from that group may produce anomalous results.
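A hypothetical sketch of that failure mode: a simple threshold classifier is tuned on group A only, then applied to a group B whose feature distribution is shifted. The groups, the single "feature", and all numbers below are invented for illustration; the point is only that accuracy collapses on the group the training data excluded:

```python
import random

random.seed(1)

# Synthetic one-feature data: group B's feature distribution is shifted
# relative to group A's, mimicking a subgroup the training set missed.
def sample(group, label, n):
    shift = 0.0 if group == "A" else 2.0
    center = 1.0 if label == 1 else -1.0
    return [(random.gauss(center + shift, 0.5), label) for _ in range(n)]

train = sample("A", 0, 200) + sample("A", 1, 200)

# Pick the decision threshold that best separates the training group.
best_t, best_acc = 0.0, -1.0
for t in [i / 10 for i in range(-30, 31)]:
    acc = sum((x > t) == bool(y) for x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

test_a = sample("A", 0, 200) + sample("A", 1, 200)
test_b = sample("B", 0, 200) + sample("B", 1, 200)
acc_a = sum((x > best_t) == bool(y) for x, y in test_a) / len(test_a)
acc_b = sum((x > best_t) == bool(y) for x, y in test_b) / len(test_b)
print(round(acc_a, 2), round(acc_b, 2))  # group B scores far worse than group A
```

The classifier never saw group B, so the threshold it learned for group A systematically misreads group B's shifted feature, without any warning that it is doing so.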