EVAL-UM 2018 Workshop/Hackathon
Towards Comparative Evaluation Metrics and Processes for User Modelling
To be held in conjunction with the 13th European Conference on Technology-Enhanced Learning, EC-TEL 2018, 03-06 September 2018, Leeds, UK
- What is happening?
- A hackathon with a user modelling and analytics based task in the domain of Technology Enhanced Learning
- Student-logged data will be analysed based on a set of predefined guidelines and metrics that can be extended
- The actual task description and datasets will be provided to participants at least 2 weeks before the workshop.
- When is it happening?
- In conjunction with EC-TEL 2018
- Leeds, UK, September 3-4, 2018
- What is expected of me?
- Form a team of 2-5 members and register your interest by submitting a half-page position statement via EasyChair (https://easychair.org/conferences/?conf=evalum2018) no later than 11:59pm (UTC-12) on 8 July 2018.
- Participate as a group and execute the evaluation task for User Modelling and Data Analytics
Digital personalisation research relies on the use of personalised information for evaluations and testing. For this, comparison metrics and general practices need to be established. In education-related technologies in particular, comparative evaluation of systems using personal data is essential to advance both the state-of-the-art and the systems themselves. However, there is currently no established or standardised means of comparative evaluation for researchers relying on personalisation and user modelling in TEL. Furthermore, experience has shown that developing such methodologies is extremely difficult, but highly rewarding. This hackathon-based workshop will develop insights into comparative evaluation from the hands-on experience of teams working through a real-world user modelling and analytics challenge. The outcome will be a set of recommendations and guidelines to help the wider community move towards a more generalised methodology for comparative evaluation.
The EVAL-UM workshop has the ambitious goal of moving towards comparative evaluations in the domain of technology-enhanced learning (TEL) and user modelling in general. In particular, the workshop aims to initiate discussions within the TEL community and to use pragmatic exercises to advance our experience and knowledge of developing an effective comparative methodology in the TEL evaluation space. This includes not only the technical challenges associated with design and implementation, but also privacy, ethics, legal and security issues, evaluation methodologies and metrics.
Our long-term vision is the establishment of an annual shared challenge series, similar to TREC and CLEF in the information retrieval (IR) space. The establishment of such shared tasks requires that appropriate models, content, metadata, user behaviours, etc. be available, in order to comprehensively compare how different approaches and systems perform. In addition, a number of metrics and observations that participants would be expected to report would need to be outlined in order to facilitate comparison.
As a practical first step towards this shared challenge series, the proposed EVAL-UM hackathon-style workshop aims to refine metrics and to increase our (and the community's) understanding of how user modelling challenges would run and how we would support teams participating in them.
The EVAL-UM workshop will run as a hackathon in which participants are given specific user modelling and analytics tasks to perform on a provided eLearning dataset. They will be required to report their approach, process and findings against common guidelines, evaluation models (categories) and metrics. The output of this pragmatic exercise will be used to test and develop effective comparative evaluations across different approaches and tools in the domain of user modelling and technology-enhanced learning. As this area of research is inherently diverse and complex, it is expected to draw on and benefit techniques from user modelling and personalisation, learning analytics, machine learning, the semantic web and context-aware systems, adaptive learning flow and content, student feedback and interventions, recommendations, natural language processing and visualisation.
More specifically, the workshop hackathon will consist of 5 to 6 teams, each with 2 to 5 members. The teams will use an eLearning dataset (the AMAS student-logged data) to build a model and analyse student performance based on their engagement patterns. Prior to the workshop, participants will be given specifications outlining the data structure and the hackathon goals. At the workshop, participants will be provided with a formatted, cleaned version of the AMAS dataset, data descriptors, hackathon goals, task guidelines, and phases to complete. To maximise breakout development time, the workshop will ideally run over two half days consisting of three sessions. During the first session, the hackathon's goals and tasks will be presented. During the second session, which will take up 70% of the workshop time, participants will build their models and document their approach, processes, evaluation and metrics using the provided guidelines; this session will also include periodic catch-ups. The final session will consist of a group discussion led by brief presentations in which each team describes its approach and outcomes. We plan to bring interested parties together after EC-TEL to further develop the workshop outcomes towards shared challenge generation.
Call for Participation
The EVAL-UM workshop is now accepting half-page position statements from groups, outlining a group's interest in taking part in the workshop and its knowledge/expertise in the space.
Selection uses a single-blind procedure, in which all position statement submissions will be reviewed by at least two programme committee members and assessed on their potential to contribute to the workshop hackathon; a meta-review will be provided by the workshop organisers.
Position statements should be submitted in PDF format through the EasyChair system (https://easychair.org/conferences/?conf=evalum2018) no later than 11:59pm (UTC-12) on 8 July 2018. The supplied template can be used.
The EVAL-UM workshop will provide:
- Prizes for participating teams
- Funded support for student participation
After the workshop, participants will be invited to extend their position statements with details of their participation in the workshop hackathon and lessons learned. These papers will be published as workshop proceedings with CEUR. In addition, based on the workshop outcomes, a blueprint will be published with a set of recommendations and guidelines to help the community move towards a more generalised methodology for comparative evaluation. This blueprint will include an approach to shared challenge generation, the necessary processes, and evaluation categories and metrics.
Deadline for group position statements: 08-July-2018
Group Notification: 15-July-2018
EVAL-UM workshop at EC-TEL: Sept 3rd 2018 (afternoon) and Sept 4th 2018 (morning)
- Owen Conlan, Trinity College Dublin, Ireland
- Athanasios Staikopoulos, Trinity College Dublin, Ireland
- Bilal Yousuf, Trinity College Dublin, Ireland
- Kevin Koidl, Trinity College Dublin, Ireland
- Liadh Kelly, Maynooth University, Ireland
- Alisdair Smithies, University of Leeds, UK
- Eelco Herder, L3S Research Center, Hannover, Germany
- Claudia Hauff, Delft University of Technology, The Netherlands
- Judy Kay, University of Sydney, Australia
- Tsvi Kuflik, The University of Haifa, Israel
- Francesco Ricci, University of Rome, Italy
- Stephan Weibelzahl, Private University of Applied Sciences Göttingen, Germany
- Vincent Wade, Trinity College Dublin, Ireland