Remote simultaneous interpreting (RSI) and computer-assisted interpreting (CAI) tools powered by automatic speech recognition (ASR) and artificial intelligence (AI) are technological developments in the interpreting profession that were propelled by the COVID-19 pandemic. Given the additional complexity of operating a user interface (UI) during simultaneous interpreting, UI design and overall system usability are of crucial importance for successful RSI. However, to our knowledge, no previous study has evaluated an RSI platform from the perspective of its usability and its users’ requirements. In this article, we present a recent evaluation study of the RSI platform SmarTerp. Reflecting a general trend in the industry, SmarTerp is one of the first RSI systems to integrate an ASR-based CAI tool; this is therefore the first evaluation study of a CAI-tool-integrated RSI platform. The study drew on the usability-engineering evaluation methods of expert appraisal and field trial. Eight experienced conference interpreters tested SmarTerp in a simulated RSI conference based on a real debate at the European Parliament. The interpreters were divided into four booths (DEU, FRA, ITA, SPA), with ENG as the relay language. After the test, they completed three tasks that gathered feedback from the perspective of (1) the individual interpreter, (2) the booth, and (3) the group of professionals. After presenting the SmarTerp project, the paper defines the concept of ‘usability’ and details the study method. The discussion of the results sheds light on interpreters’ needs and requirements for RSI systems.
Keywords: remote simultaneous interpreting, computer-assisted interpreting, automatic speech recognition, interpreting technology evaluation research, usability