It lets us visualize the movements of our tongue in real time as we speak
French researchers have developed a system that allows us to visualize, in real time, the movements of our tongue as we speak. With it, we can learn to pronounce correctly both our own language and a foreign one. The system is based on an augmented ultrasound of the tongue, driven by a machine learning algorithm.
French researchers have developed a system that allows us to visualize, in real time, the movements of our tongue. It is an augmented tongue ultrasound that, in addition to showing the face and lips, makes visible the tongue, palate and teeth, which are usually hidden inside the mouth.
The movements of the tongue are captured with the help of an ultrasound probe placed under the jaw, and then processed by a machine learning algorithm that drives a kind of "articulatory talking head".
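The article does not describe the researchers' actual model, but the general idea of such a pipeline can be sketched: a learned mapping takes each ultrasound frame and outputs a small set of articulatory parameters (e.g. tongue tip height, tongue body position) that could animate an avatar. Below is a minimal, hypothetical illustration using ridge regression on toy synthetic data; all sizes, names, and the choice of model are assumptions, not the published method.

```python
import numpy as np

# Hypothetical sketch only: the actual system's model is not specified
# in the article. We illustrate the idea with ridge regression mapping
# flattened "ultrasound" frames to articulatory parameters that could
# drive a talking-head avatar.

rng = np.random.default_rng(0)

n_frames, n_pixels, n_params = 200, 64, 4      # toy sizes, not real data
X = rng.normal(size=(n_frames, n_pixels))       # synthetic training frames
W_true = rng.normal(size=(n_pixels, n_params))  # unknown "ground truth" map
Y = X @ W_true + 0.01 * rng.normal(size=(n_frames, n_params))

# Ridge regression fit: W = (X^T X + lambda I)^-1 X^T Y
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_pixels), X.T @ Y)

def frame_to_params(frame):
    """Map one ultrasound frame to articulatory parameters (per frame,
    cheap enough for real-time use)."""
    return frame @ W

new_frame = rng.normal(size=n_pixels)
params = frame_to_params(new_frame)   # 4 articulatory parameters
```

The appeal of a learned mapping, as opposed to showing the raw ultrasound image, is that the output parameters can animate a clean, interpretable model of the vocal tract rather than a noisy grayscale image.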
This visual feedback system, which helps users better understand and improve their pronunciation, will be very useful for speech therapy rehabilitation and for learning a foreign language. The results are published in the journal Speech Communication.
Speech therapy addresses both the correction of defects in the voice and in the pronunciation of the sounds of one's own language, and the correct pronunciation of the sounds of another language.
The speech rehabilitation of a person with articulation disorders relies in part on exercise-based training, in which the therapist qualitatively analyzes the patient's pronunciation and explains, orally or with the help of diagrams, how to use the articulators, and especially the tongue, something we are generally not aware of.
The effectiveness of this rehabilitation depends on how well the patient integrates the therapist's indications. This is where visual articulatory feedback systems come in: they allow patients to visualize their own articulatory movements (especially those of the tongue) in real time, so that they become aware of what they are doing and correct pronunciation defects more quickly.
For years, English-speaking researchers have used ultrasound to design visual feedback systems. In that work, the image of the tongue is obtained by placing a probe, similar to the one used to image a fetus, under the jaw. This image, however, is difficult for the patient to interpret, both because its quality is poor and because it gives no information about the palate or teeth.
Improved visual feedback
In the new work, the French researchers improve this visual feedback with a kind of articulatory talking head that is animated automatically, in real time, from the ultrasound images by an algorithm, according to a statement.
This virtual clone of a speaker provides a much more intuitive and contextualized visualization of the physical process of articulating and pronouncing a language.
The strength of the new system, compared with previous ones, lies in the machine learning algorithm on which these French researchers have been working for several years.