Fujitsu LiveTalk is software for situations in which multiple people share information, such as meetings or classroom settings: it recognizes a speaker's speech, immediately converts it into text, and displays it on multiple PC screens.
Among the trendsetting innovations straight from Fujitsu Laboratories in Japan, the Fujitsu LiveTalk solution was one of the highlights at the Fujitsu booth at CEBIT 2017.
Fujitsu has been developing the software's simultaneous interpretation into 19 languages, making communication in foreign languages remarkably easy. It is a communication tool that translates a speaker's speech into text via speech recognition and displays the content on a PC, tablet, or smartphone screen in real time.
Focusing on the issue of communication with people with hearing disabilities, and building on the technologies of Fujitsu, which has long advanced initiatives in universal design, Fujitsu Social Science Laboratory developed and commercialized LiveTalk, a participatory communication tool that creates a smoother, more natural communication environment for people with hearing disabilities.
We interviewed Mr Michael Erhard, Head of Communication at Fujitsu Central Europe.
- What are the features of LiveTalk?
“LiveTalk was originally designed for people with hearing disabilities in Japan. In a second step it was developed for simultaneous translation into many languages, and at the moment it supports 19 languages, for example English, Chinese, Korean, French, Spanish, Arabic, and Russian. It is a learning system and is quick to catch up with other languages thanks to its self-teaching capabilities, supported by artificial intelligence.”
- When will it be available on the market?
“This is the first time we have presented this system in Europe. At the moment it is available in the Japanese market, and we plan to be present in other markets.”
- Who are the ideal customers for this product?
“In a first step, some 100 organisations in Japan use this system to communicate with disabled people, in particular with people with hearing disabilities: for example hospitals, government offices, businesses, and educational institutions. These institutions have to communicate with people with hearing disabilities, and previously they had to use sign language, so only a few people were able to communicate. LiveTalk solved this problem. In particular, the system allows for keyboard input as well as speech input, so that hearing-impaired people can participate fully, alongside speakers of other languages.
It works even without a human transcriber or other assistance, which until now has been required when hearing-impaired and hearing people work or learn in the same environment. Another possible application is in multinational organisations, where the system can translate simultaneously into different languages even if multiple participants speak at the same time.”
“It is very easy to use, because it runs directly on the system and no additional hardware is necessary. It supports every device with a microphone, such as a PC, tablet, or smartphone. The spoken text is smoothly and reliably translated into any available language.”
- What about problems related to pronunciation?
“As I explained, this is a learning system with artificial intelligence in the background. The speech is converted into text and displayed on PC screens in real time, with speech recognition using handheld and headset microphones. If there are any mistakes in the conversion of speech into text, the system allows for keyboard input as well as speech input on the PC. When I tried the system the first time, the results were not so good; then I tried a second and a third time, and the system got better. It is basically a learning system, which means the voice recognition improves with practice.”