Speakers

EARLI SIG1
Benő Csapó

Benő Csapó is a Professor of Education at the University of Szeged, where he heads the Doctoral School of Education, the Research Group on the Development of Competencies (Hungarian Academy of Sciences), and the Center for Research on Learning and Instruction. He was a Humboldt research fellow at the University of Bremen (1989) and at the Center for Advanced Study in the Behavioral Sciences, Stanford, California (1994–95). He was a member of the Problem Solving Expert Groups that devised the assessment frameworks for the 2003 and 2012 OECD PISA surveys and the head of the Technological Issues Working Group in the Assessment and Teaching of 21st Century Skills initiative (2009–2010). He was twice an elected member of the Executive Committee of the European Association for Research on Learning and Instruction (1997–2001) and president of the 12th Biennial Conference for Research on Learning and Instruction (Budapest, 2007). His fields of research include cognitive development, educational evaluation, and technology-based assessment.

Assessment for Personalized Learning: The Promise of Technology

Benő Csapó

MTA–SZTE Research Group on the Development of Competencies

University of Szeged, Hungary

By the end of the last century, when the possibilities for large-scale summative assessment had been exploited and the limitations and negative impacts of high-stakes testing and test-based accountability systems had become clear, the attention of researchers and developers turned to formative assessment. Several types and models of formative assessment have been proposed for the immediate support of students’ learning, and in the past few decades this direction of development, often termed assessment for learning, has been buttressed by strong theoretical foundations. A common feature of these assessments is the frequent, precise, individualized feedback students receive in the process of learning, mostly in classroom contexts. The major challenges for large-scale implementation of these models are the limited availability of high-quality assessment instruments, the expenses involved in frequent use, the expertise and teacher time required, and the general organizational and administrative difficulties involved. Technology-based assessment may resolve most of these constraints, but further research is still needed to fit technological solutions to real educational needs. This presentation reviews recent research on technology-based assessment, in particular those studies that support large-scale implementation of assessment for learning models. The current possibilities will be illustrated with a large-scale project in Hungary that implements an online diagnostic assessment system, eDia.

Dragan Gašević

Dragan Gašević is Professor of Learning Analytics in the Faculty of Education at Monash University. Previously, he was a Professor and the Sir Tim O’Shea Chair in Learning Analytics and Informatics in the Moray House School of Education and the School of Informatics at the University of Edinburgh. He is the immediate past president (2015–2017) of the Society for Learning Analytics Research (SoLAR) and holds several honorary appointments in Australia, Canada, Hong Kong, and the USA. A computer scientist by training and skills, Dragan considers himself a learning analyst who develops computational methods that can shape next-generation learning technologies and advance our understanding of self-regulated and social learning. Funded by granting agencies and industry in Canada, Australia, Europe, and the USA, Dragan is a recipient of several best paper awards at major international conferences in learning and software technology. He had the pleasure of serving as a founding program co-chair of the International Conference on Learning Analytics & Knowledge (LAK) in 2011 and 2012 and of the Learning Analytics Summer Institute in 2013 and 2014, as general chair of LAK’16, and as a founding editor of the Journal of Learning Analytics (2012–2017). Dragan is a (co-)author of numerous research papers and books and a frequent keynote speaker.

Can Learning Analytics Offer Meaningful Assessment?

Samuel Greiff

Prof Dr Samuel Greiff is a research group leader, principal investigator, and ATTRACT fellow at the University of Luxembourg. He holds a PhD in cognitive and experimental psychology from the University of Heidelberg, Germany (passed with distinction).

Prof Greiff has been awarded several national and international research grants by diverse funding organizations, such as the German Ministry of Education and Research and the European Union (overall funding approx. €9.3 million), is currently a fellow in the Luxembourg research programme of excellence, and has published articles in national and international scientific journals and books (>100 contributions in peer-reviewed journals, many of them leading in their field). He has an extensive record of conference contributions and invited talks (>200 talks) and serves as an editor for several journals, for instance as editor-in-chief of the European Journal of Psychological Assessment, as associate editor of Thinking Skills & Creativity, and as guest editor for the Journal of Educational Psychology, Computers in Human Behavior, and the Journal of Business & Psychology. He regularly serves as an ad-hoc reviewer for around 40 different journals and currently sits on five editorial boards.

He has been and continues to be involved in the 2012, 2015, and 2018 cycles of the Programme for International Student Assessment (PISA), for instance as an external advisor to the PISA 2012 and 2015 Expert and Subject Matter Expert Groups and as a contracting partner at his institution. He also serves as chair of the problem solving expert group for the second cycle of the Programme for the International Assessment of Adult Competencies (PIAAC). In these positions, he has considerably shaped the understanding of transversal skills across several large-scale assessments.