Collaborative rubric design: Evaluating AI platforms for academic research
The AMICAL Information Literacy Initiatives Committee (ILIC) is happy to announce the upcoming hands-on workshop, “Collaborative Rubric Design: Evaluating AI Platforms for Academic Research”.
In this session, participants will start from the rubric provided below to evaluate AI platforms, refining and enhancing it as they go. The goal of the webinar is to produce a well-developed rubric that faculty members and librarians can incorporate into AI instruction, fostering discussions with students that promote critical and informed use of AI in academic contexts.
The workshop opens with “Rubric Criteria Review” breakouts, in which groups of 4–5 participants refine the rubric’s criteria by clarifying the language, adding missing elements, and/or noting impractical parts. In the second round of breakouts, “Scenario Testing”, groups apply the revised rubric to an instruction context to assess how well the criteria support decision-making and to reveal remaining gaps.
Participants will leave with a rubric (or a set of questions) for evaluating AI platforms for academic research.
Who should attend? Librarians, faculty, and instructional designers.
Before the webinar:
- Review the rubric that will be used as a working document during the workshop
- If you are interested in general rubric design, here is a good read.
The webinar will be led by:
- Kate Ruprecht
- Stavros Hadjisolomou
- Rita El-Haddad
- Michael Stoepel
AI use notice: We used ChatGPT for editing clarity. Perplexity Pro was used to format the rubric for evaluating the AI platforms.