Bernstein Group "Components of cognition: small networks to flexible rules": Multi-modal emotion recognition and blind source separation
The immediate goal is to analyze concurrent speech utterances and facial expressions in terms of speaker emotion and intention. Speech and face information will be combined into a multi-modal feature vector and subjected to blind source separation by independent component analysis (ICA). Similar methods were suggested by the applicant, in a different context, in his Habilitationsschrift [Michaelis 80]. In the longer term, the proposed project aims at the automatic recognition of subtly different human interactions (e.g., friendly/cooperative, impatient/evasive, aversive/violent). A second long-term goal is to apply the automatic recognition of emotional states to a neurobiological investigation of the neural basis of emotion: the recognition results can be correlated with findings from EEG and MRI investigations [Heinzel 05]. The software tools to be developed here would therefore be invaluable for brain imaging (fMRI) studies of human emotion.
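The ICA step above can be illustrated with a minimal sketch. The following code is not the project's implementation; it is a hypothetical toy setup in which two latent "emotion-related" signals are mixed into a four-dimensional feature vector (standing in for concatenated speech and face features), and a basic symmetric FastICA with whitening and a tanh nonlinearity recovers the independent components. All signal names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent sources (stand-ins for independent emotion-related signals)
n = 2000
s1 = np.sign(np.sin(np.linspace(0, 40, n)))   # sub-Gaussian square-like signal
s2 = rng.laplace(size=n)                      # super-Gaussian signal
S = np.vstack([s1, s2])                       # true sources, shape (2, n)

# Mix into a 4-dim "multi-modal feature vector" (e.g., 2 speech + 2 face features)
A = rng.normal(size=(4, 2))                   # unknown mixing matrix
X = A @ S                                     # observed feature vectors, (4, n)

# --- whitening: decorrelate and rescale to unit variance, keep top 2 components ---
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
idx = np.argsort(d)[::-1][:2]                 # two largest eigenvalues
W_white = (E[:, idx] / np.sqrt(d[idx])).T     # whitening matrix, (2, 4)
Z = W_white @ Xc                              # whitened data, cov(Z) = I

# --- symmetric FastICA iteration with tanh (log-cosh) nonlinearity ---
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    Gp = 1.0 - G**2                           # derivative of tanh
    W_new = (G @ Z.T) / n - np.diag(Gp.mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)
    W = u @ vt                                # symmetric decorrelation (W W^T)^(-1/2) W

Y = W @ Z                                     # estimated sources, (2, n)
```

Because ICA recovers sources only up to permutation and sign, each row of `Y` should match one true source up to a sign flip, which can be checked via the absolute correlation with the rows of `S`.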