[Call for Chapters] Quality Assurance and Assessment Practices in Translation and Interpreting
Editors
Elsa Huertas-Barros, University of Westminster
Sonia Vandepitte, Universiteit Gent
Emilia Iglesias-Fernández, Universidad de Granada
Call for Chapters
Proposals Submission Deadline: March 30, 2017
Full Chapters Due: July 30, 2017
Final Chapter Submission: November 30, 2017
Introduction
Since translation and interpreting established themselves as professions and as academic disciplines, both the industry and the academic setting have evolved swiftly as a consequence of the significant changes affecting the field (Drugan, 2013: 185; Saldanha and O’Brien, 2014: 95) and the innovative approaches and concepts linked to the disciplines in recent decades. In the workplace, the development of translation memories and machine translation has led to new translation quality assurance practices, with translators finding themselves checking not only human translations but also machine translation output. In training settings, these developments have inevitably resulted in new forms of feedback and assessment that are replacing more traditional ways of judging students’ performance in translation and interpreting training. These include, for instance, diagnostic, summative and formative assessment, self-assessment, reflective diaries, translation commentaries, and formative feedback by means of peer and self-assessment tasks.
In this context, the notions of revision and interpersonal competences have gained great importance, with international projects such as OPTIMALE recognising them as high priorities in the labour market, and many translation scholars calling for revision training and the introduction of collaborative learning in translation education and training (Hurtado Albir, 1999/2003, 2007, 2015; Kiraly, 2000; González Davies, 2004; Kelly, 2005; Klimkowski, 2006; Way, 2008; Huertas Barros, 2011, 2013; Galán Mañas and Hurtado Albir, 2015; Huertas Barros and Vine, 2016; Lisaité et al., 2016, among others). In particular, the notion of peer feedback as a form of collaboration and its positive impact on translation competences (Lisaité et al., 2016; Vandepitte and Lisaité, 2016; Flanagan and Heine, 2015; Heine, 2016) has meant incorporating Translation Quality Assessment (TQA) into teaching, where revision practices can be linked to feedback processes in the industry (i.e. students are introduced to professional quality standards, quality control criteria and benchmarks recognised at international level). The ongoing research project “Establishing competence levels in translation competence acquisition (written translation)”, carried out by PACTE, can also be seen as a first but solid step in this direction, as it will serve as a guide towards the establishment of criteria for professional quality control.
Quality assessment therefore plays an essential role in both professional and academic settings. In the industry context, it is mainly linked to the quality of translation and interpreting products and services. In education and training, quality assessment has two main roles: focusing on the translation and interpreting processes and on trainees’ learning needs (formative function), and evaluating the knowledge acquired or grading students’ achievements (summative function).
Quality is also a central notion in interpreter education, and Interpreting Quality Assessment (IQA) is one of the most robust and prosperous fields in Interpreting Studies. From its outset, IQA has been concerned with identifying a set of verbal and nonverbal criteria (Bühler, 1986; Kurz, 1993/2002, to name just a few) and with determining their weight in the evaluation of conference interpretation and interpreters. The importance that different groups of interpreting users attach to certain criteria (Gile, 1991; Kurz & Pöchhacker, 1995; Collados Aís et al., 2007/2011; Chiaro & Nocella, 2004; Zwischenberger & Pöchhacker, 2010) is useful in informing the design and development of consistent criteria. But findings show that rating criteria are difficult to separate (Collados Aís, 1998; Pradas Macías, 2006; Collados Aís et al., 2007; Iglesias Fernández, 2013), since some are correlated constructs (Clifford, 2005; Yeh & Liu, 2008). This lack of consistent rating criteria (Collados Aís & García Becerra, 2015) precludes attempts at their operationalisation and, consequently, assessment in interpreting still lacks test reliability (Sawyer, 2004; Angelelli & Jacobson, 2009; Liu, 2015). Nevertheless, interpreting assessment has made great progress in terms of tools and resources. The use of rubrics, portfolios, and reflective, deliberate and collaborative practice through technology-enhanced interpreter training platforms offers a myriad of ways of practising interpreting (see ORCIT, Speechpool and the Interpreters in Brussels Practice Group, amongst others), giving and receiving feedback (InterpretimeBank) and training online. However, the need still exists for a better understanding of the constructs underlying the criteria, as well as for reliable measurements to inform the design of the tests, tools and resources used to assess students and to provide them with feedback from trainers or peers.
Providing students with valuable feedback and implementing effective forms of assessment are therefore essential not only for maximising the teaching process, but also for enhancing students’ learning experience. Translation and interpreting trainees expect information about industry assessment and revision practices, and will need training to become future assessors themselves, for instance in their roles as revisers and reviewers (as provided for in the European standard EN 15038:2006 and in the new international standard ISO 17100:2015). In other words, trainees now practise observing translation and interpreting performances and translated and interpreted texts and discourses, and tactfully communicating to a peer how the process or the end result could be improved (feedback). In addition, they are trained to assign a mark on a scale to a translation or interpreting performance (assessment).
Objective
This publication will provide a comprehensive insight into some of the latest research developments in assessment practices in academic and professional settings, and will shed light on one of the main concerns in the training of future generations of translators and interpreters. The empirical research and case studies that form the basis of the book will focus on the behaviour and good assessment practices of translation and interpreting practitioners and educators, providing trainees with information about industry assessment practices and informing the way future translators and interpreters should be trained. This publication will therefore be a unique and ground-breaking contribution to Translation and Interpreting Studies.
Target Audience
This book will be a reference source for students, researchers, academics, and professional translators and interpreters in the field of Translation and Interpreting Studies. In particular, as well as enhancing students’ learning, the recommendations offered in the book will help those with an interest in pedagogical research to introduce new trends in feedback and assessment into their programmes and to improve teaching and learning methodologies and practices. Emerging interpreting sectors, such as remote interpreting in legal and health services, can also draw on the book to upgrade their own feedback and assessment practices.
This publication will make a notable contribution to the development of both translation and interpreting, offering Translation and Interpreting scholars, educators and practitioners a wide range of empirical case studies that demonstrate innovation, experimental rigour, and practical ideas and solutions. The book will also play an essential role in proposing practical and empirically based ways for universities and the industry to overcome traditional barriers to learning by promoting student- and competence-centred training and effective ways to assess translation and interpreting quality.
Recommended Topics
Observing, revising, giving feedback and assessing are issues at which many of the debates on translation and interpreting training and practice intersect. We welcome empirical contributions about competence assessment and quality in both translation and interpreting training and the industry, whether they take a behavioural, sociocultural, emerging or other approach. The new findings will either support existing theoretical frameworks or point to refinements or revisions of present scholarly work. With the aim of exploring the key theme further, we invite contributors to consider, but not limit themselves to, the following topics:
• Innovations in feedback and assessment in the translation and interpreting industry:
o Application requirements and assessments
o Employee training
o Student trainee training in the industry: training period, work placements, etc.
o Translation Quality Assessment: process/product, Dynamic Quality Framework (DQF), Multidimensional Quality Metrics (MQM), etc.
o Quality in conference and simultaneous interpreting
o Feedback and assessment practices in innovative forms of translation and interpreting: e.g. audiovisual translation, audio description, and videoconference- and telephone-based remote interpreting
• Innovations in feedback and assessment in translation and interpreting education and training:
o Assessment criteria (adequacy/acceptability, individual learning paths), methods and instruments
o Forms of assessment: diagnostic, summative and formative assessment, self-assessment, etc.
o Process-oriented vs product-oriented assessment models
o Peer feedback and assessment models
o Students’ reception of different forms of feedback and its repercussions: formative feedback, directive feedback, facilitative feedback, teacher vs peer feedback, etc.
o Feedback and assessment in innovative forms of translation and interpreting: e.g. audiovisual translation, audio description, and videoconference- and telephone-based remote interpreting
o Feedback and assessment in technology-enhanced online training
Preference will be given to papers that address the following questions: Which innovations in feedback and assessment practices in translation and interpreting training yield empirically tested better results than traditional methods? How can the methodologies of feedback and assessment studies, such as surveys about feedback and assessment methods, experiments and others, be improved? In particular, how can greater consistency of quality criteria, lower inter-rater variability and higher reliability in test designs be achieved?
Submission Procedure
Researchers and practitioners are invited to submit, on or before March 30, 2017, a chapter proposal of 1,000 to 2,000 words clearly explaining the mission and concerns of the proposed chapter. Authors will be notified by May 10, 2017 about the status of their proposals and sent chapter guidelines. Full chapters are expected to be submitted by July 30, 2017, and all interested authors must consult the guidelines for manuscript submissions at http://www.igi-global.com/publish/contributor-resources/before-you-write/ prior to submission. All submitted chapters will be reviewed on a double-blind basis. Contributors may also be requested to serve as reviewers for this project.
Note: There are no submission or acceptance fees for manuscripts submitted to this book publication, Quality Assurance and Assessment Practices in Translation and Interpreting. All manuscripts are accepted based on a double-blind peer review editorial process.
All proposals should be submitted through the E-Editorial Discovery™ online submission manager.
Publisher
This book is scheduled to be published by IGI Global (formerly Idea Group Inc.), an international academic publisher of the “Information Science Reference” (formerly Idea Group Reference), “Medical Information Science Reference,” “Business Science Reference,” and “Engineering Science Reference” imprints. IGI Global specializes in publishing reference books, scholarly journals, and electronic databases featuring academic research on a variety of innovative topic areas including, but not limited to, education, social science, medicine and healthcare, business and management, information science and technology, engineering, public administration, library and information science, media and communication studies, and environmental science. For additional information regarding the publisher, please visit www.igi-global.com. This publication is anticipated to be released in 2018.
Important Dates
March 30, 2017: Proposal Submission Deadline
May 10, 2017: Notification of Acceptance
July 30, 2017: Full Chapter Submission
September 30, 2017: Review Results Returned
November 15, 2017: Final Acceptance Notification
November 30, 2017: Final Chapter Submission
Inquiries
Elsa Huertas Barros
University of Westminster
E.Huertasbarros@westminster.ac.uk
Sonia Vandepitte
Universiteit Gent
Sonia.Vandepitte@UGent.be
Emilia Iglesias Fernández
Universidad de Granada
emigle@ugr.es