Pre-Conference Workshops

The workshops will be held as part of EALTA 2023.
EBB Rating Scale Development Bootcamp

Workshop Topic

An empirically derived, binary-choice, boundary-definition scale (EBB scale; Turner, 2001; Turner & Upshur, 1996, 2002; Upshur & Turner, 1995) consists of an ordered set of binary questions that guides raters through the decision-making process. Unlike scales based on theorised or hypothesised language levels, EBB scales are developed empirically with reference to sets of authentic learner or test-taker performance samples. Research on the original EBB scales has demonstrated their efficiency, their high degree of inter-rater reliability, and their accessibility for raters of varied backgrounds. While some scholars have attested to the benefits of EBB scales (e.g., Ewert & Shin, 2015; Hirai & Koizumi, 2013; Plakans, 2013), language assessment professionals tend to be less familiar with the format and development process of this scale type, and with its potential for their own contexts, than with other scale types. This workshop will give participants the chance to learn more about EBB scales, to try using them, and to draft their own using authentic performance samples.
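The decision-making format described above can be sketched as a short branching procedure: each yes/no question routes the rater towards a score band. The questions and bands below are purely hypothetical illustrations, not taken from any published EBB scale.

```r
# Hypothetical sketch of an EBB-style rating decision: an ordered set of
# binary questions partitions performances into four score bands.
# The questions themselves are invented for illustration only.
ebb_score <- function(task_accomplished, ideas_developed, errors_impede) {
  if (!task_accomplished) {
    # Lower branch: distinguish bands 1 and 2
    if (errors_impede) 1 else 2
  } else {
    # Upper branch: distinguish bands 3 and 4
    if (ideas_developed) 4 else 3
  }
}

ebb_score(task_accomplished = TRUE, ideas_developed = FALSE, errors_impede = FALSE)
```

In an actual EBB scale, such questions are drafted and refined empirically by examining real performance samples at each boundary, which is the process the workshop walks participants through.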

Beverly Baker has worked and published widely in the areas of language teacher development and language assessment. She currently holds the Research Chair in Language Assessment Literacy at uOttawa’s Official Languages and Bilingualism Institute. Beverly is a founding member of the Canadian Association of Language Assessment (CALA) as well as the Language Assessment Literacy SIG of ILTA. She is the 2019 winner of the British Council’s International Assessment Award. Over her career, she has given workshops to more than 1000 teachers, scholars, and educational administrators on language assessment topics.

Heike Neumann works and researches in the fields of second language writing, English for Academic Purposes (EAP), and second language assessment, with publications in all these areas. She is a Senior Lecturer in the EAP Program at Concordia University, where she coordinates the development and grading of common exams for 400 EAP students annually and was involved in the development and validation of the Concordia Comprehensive English Placement Test (ConCEPT). She is a founding member of the Canadian Association of Language Assessment and has served the International Language Testing Association (ILTA) in various roles. She regularly delivers courses and workshops on language assessment.

Examining the Dimensionality of Language Tests Using Confirmatory Factor Analysis in R Statistical Software

Workshop Topic

Participants will be introduced to the concept of dimensionality in language tests. They will learn how the dimensionality of language tests can be examined through confirmatory factor analysis. They will also learn how to run different confirmatory factor analysis models using the “lavaan” package in R and how to interpret the output, for example to determine which model best represents the factor structure of a language test.
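As a minimal sketch of this kind of analysis, the following compares a one-factor and a two-factor model in lavaan. The item names and the data frame `test_data` are hypothetical placeholders, not workshop material.

```r
# Sketch: does a language test measure one general ability or two
# separable skills? Fit both CFA models and compare them.
# Item names (read1..listen3) and 'test_data' are hypothetical.
library(lavaan)

one_factor <- 'ability =~ read1 + read2 + read3 + listen1 + listen2 + listen3'

two_factor <- '
  reading   =~ read1 + read2 + read3
  listening =~ listen1 + listen2 + listen3
'

fit1 <- cfa(one_factor, data = test_data)
fit2 <- cfa(two_factor, data = test_data)

# Inspect fit indices (CFI, TLI, RMSEA, SRMR) for each model
summary(fit2, fit.measures = TRUE, standardized = TRUE)

# Chi-square difference test between the nested models
anova(fit1, fit2)
```

The model whose fit indices meet conventional cut-offs, and which is favoured by the difference test, is taken as the better representation of the test's factor structure.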

Gholam Hassan Khajavy (PhD) is Assistant Professor of Applied Linguistics at the University of Bojnord, Iran. His main research interests are the psychology of language learning and research methods in Applied Linguistics. He uses advanced statistical procedures such as structural equation modelling, multilevel modelling, and latent growth curve modelling in his research. He is the consulting editor of Educational Psychology (Routledge) and recently co-edited a special issue on the role of grit among language learners, published in the Journal for the Psychology of Language Learning. He has delivered several workshops on structural equation modelling and multivariate statistics at different universities. Moreover, he has published in international journals such as TESOL Quarterly, Studies in Second Language Acquisition, Language Learning, and Contemporary Educational Psychology.

Using Sequential-Categorial Analysis to Assess Interactional Competence

Workshop Topic

There is growing recognition in the field of language testing that, in addition to the four language skills – listening, reading, speaking, and writing – we need to assess a fifth skill: a person’s ability to interact, termed their interactional competence (IC). In this workshop I will introduce participants to 1) the concept of IC, 2) the methodology of researching IC, and 3) how to develop IC test constructs and IC rating scales. In particular, I will explain the fundamentals of an analytic procedure called Sequential-Categorial Analysis, which combines Conversation Analysis and Membership Categorization Analysis to analyse interaction and IC. By the end of the workshop, participants will be able to develop their own IC test constructs and rating scales for their local assessment contexts. No prior knowledge of IC, rubric development, or Sequential-Categorial Analysis is required.

David Wei Dai (PhD) is Lecturer (equivalent to a tenure-track Assistant Professor) of Clinical Communication at Monash University in Australia. He is Editor for the journal TESOL in Context, Visiting Scholar at University College London, and Nominating Member of the International Language Testing Association. His PhD dissertation on assessing interactional competence won the 2023 American Association for Applied Linguistics (AAAL) Dissertation Award. David’s research program focuses on interactional competence, language assessment, psychometrics (Many-Facet Rasch Measurement and Classical Test Theory), discourse analysis (Conversation Analysis and Membership Categorization Analysis), and clinical communication. His work has appeared in journals such as Language Assessment Quarterly, Language Teaching Research, and Applied Linguistics Review. He is currently working on two monographs on assessment under contract with Peter Lang and Routledge.