Advanced Bioinformatics

Posted by Junjie Hua on October 1, 2017

Why

  • In a classroom, lecture hall, or other venue where you can’t speak aloud, you may still want a way to interact without speaking

    fig2

  • Tell a story without pronouncing it aloud

    fig3

Background

  • Researchers at Oxford University and Google DeepMind have developed artificial intelligence (AI) that reads the movement of a person’s lips, trained on thousands of hours of BBC broadcast content.
  • There is little prior research on facial muscle activity during speech.
  • It is difficult to discriminate such small electromyographic (EMG) signals.
  • The recorded action potential is a continuous, time-varying waveform that is usually difficult to analyze.

How

The mouth takes a different shape when speaking each vowel.

fig6

Masticatory EMG data were collected while speaking different vowels (chin: plus electrode, cheek: minus electrode, forehead: reference).

fig5
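The chin/cheek/forehead montage above is a standard bipolar recording. A minimal sketch of forming the differential signal, assuming one sampled trace per electrode (the channel names and signal values are illustrative, not from the post):

```python
import numpy as np

def bipolar_emg(chin, cheek, forehead):
    """Bipolar derivation: (plus - ref) - (minus - ref) = plus - minus.

    Referencing both active electrodes to the forehead cancels any
    common-mode interference shared by all three electrodes.
    """
    return (chin - forehead) - (cheek - forehead)

# Interference common to all three electrodes cancels out
t = np.linspace(0, 1, 1000)
hum = 0.5 * np.sin(2 * np.pi * 50 * t)    # e.g. 50 Hz mains hum (assumed)
muscle = np.sin(2 * np.pi * 120 * t)      # illustrative EMG component
diff = bipolar_emg(muscle + hum, hum, hum)
```

After the derivation, `diff` contains only the muscle component, which is why the three-electrode setup helps with such small signals.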

Result

  • I analyzed the envelope of each EMG signal; the envelopes of different vowels can be easily distinguished.

    fig4
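    The envelope analysis can be sketched in Python (the original work used MATLAB; the sampling rate and filter cutoff below are illustrative assumptions, not values from the post):

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    def emg_envelope(emg, fs=1000.0, cutoff=10.0):
        """Smoothed amplitude envelope of a raw EMG trace.

        fs and cutoff are assumptions; real values depend on the recorder.
        """
        # Rectify via the magnitude of the analytic signal (Hilbert transform)
        amplitude = np.abs(hilbert(emg - np.mean(emg)))
        # Low-pass filter the amplitude to obtain a smooth envelope
        b, a = butter(4, cutoff / (fs / 2), btype="low")
        return filtfilt(b, a, amplitude)

    # Example: a burst of activity stands out clearly in the envelope
    t = np.linspace(0, 1, 1000, endpoint=False)
    burst = np.sin(2 * np.pi * 80 * t) * ((t > 0.4) & (t < 0.6))
    env = emg_envelope(burst)
    ```

    The envelope discards the fast oscillation and keeps only the activation level over time, which is what makes the vowel signals separable by eye.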

  • By using a simple neural network built in MATLAB, the EMG signals of different vowels can be correctly distinguished.

    fig1
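The classification step can be reproduced outside MATLAB. A hedged Python sketch using scikit-learn on synthetic envelope-like features (the real features, labels, and network size are not given in the post, so everything below is an assumption):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in features: each vowel class gets a distinct
# mean envelope shape plus noise (real input would be recorded EMG).
n_per_class, n_features = 40, 20
vowels = ["a", "i", "u", "e", "o"]
X, y = [], []
for k, vowel in enumerate(vowels):
    template = np.sin(np.linspace(0, np.pi, n_features) * (k + 1))
    X.append(template + 0.1 * rng.standard_normal((n_per_class, n_features)))
    y += [vowel] * n_per_class
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# One small hidden layer, mirroring the "simple neural network" in the post
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

With well-separated envelope shapes a small network suffices; the hard part in practice is the feature quality, not the classifier.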