Facial Analysis of CogX Speakers
The CogX conference has over 600 speakers. We can get an approximate profile of the speakers by analyzing the photos on the speakers page with neural network classifiers that estimate age, gender, and mood.
I start by scraping the links to the individual speaker pages and then importing the speaker photo from each page:
data = Rest[Union[Flatten[
     Import[#, "Images"] & /@
      Select[Import["https://cogx.co/speakers/", "Hyperlinks"],
       StringMatchQ["*speakers*"]]]]];
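As a quick sanity check, the number of photos retrieved can be compared against the advertised speaker count (a sketch; the exact total depends on the page contents at scrape time):

```wolfram
(* how many face images were collected by the scrape *)
Length[data]
```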
Here is a sample of the faces:
RandomSample[data,5]
We can now apply an age-predicting neural network to each image; the resulting histogram shows a bias towards 30-something speakers:
ages = Classify["FacialAge", data];
Histogram[ages, AxesLabel -> {"Age", None},
 PlotLabel -> "Age Profile of CogX Speakers", BaseStyle -> 16]
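To put a number on the skew, the predicted ages can also be summarized directly (a minimal sketch, assuming `ages` holds the numeric predictions from above):

```wolfram
(* central tendency of the predicted ages *)
Mean[N[ages]]
Median[ages]
```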
The speaker list is moderately gender balanced, with 38% of speakers being female:
genders = Classify["FacialGender", data];
PieChart[Counts[genders], ChartLabels -> Automatic,
 BaseStyle -> 16, PlotLabel -> "Gender Balance of CogX Speakers"]
N[Counts[genders]/Total[Counts[genders]]]
<|"Male" -> 0.615142, "Female" -> 0.384858|>
Generally, speakers have provided photographs in which they appear happy:
moods = Classify["FacialExpression", data];
PieChart[KeyMap[# /. Indeterminate -> "" &, Counts[moods]],
 ChartLabels -> Automatic, BaseStyle -> 16,
 PlotLabel -> "Mood Distribution of CogX Speakers"]
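Faces for which the classifier returns Indeterminate could instead be dropped before charting, rather than relabeled (a sketch, assuming `moods` from above):

```wolfram
(* keep only confident mood predictions *)
PieChart[Counts[DeleteCases[moods, Indeterminate]],
 ChartLabels -> Automatic, BaseStyle -> 16,
 PlotLabel -> "Mood Distribution (confident predictions only)"]
```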