2.6 Curriculum Artifacts and Course Evaluation Inputs

To address the research questions, this study analyzed two kinds of materials: the curriculum artifacts I authored as the instructor across four iterations, and the course evaluation inputs that informed my decisions to revise those artifacts. The primary data are the artifacts (§2.6.1). Course evaluation inputs (§2.6.2 through §2.6.4) functioned as contextual triggers for revision and are described here to make the iterative cycle auditable. The human-subjects scope of the work is stated explicitly in §2.6.5.

2.6.1 Curriculum Artifacts (Primary Data)

The primary data for this study are the instructor-authored curriculum artifacts: syllabi, slide decks, module structures, weekly lesson plans, in-class activities, and assignment designs across the four iterations. Each artifact carries a dated revision history that traces what was kept, what was rewritten, what was reordered, and what was swapped in or out from one iteration to the next. Slide-deck and module revisions form the densest record: across the four iterations I produced revised versions of every weekly lesson, recording in each version what tool was being taught, what example was being used, and how the assignment was framed. These revisions provide the artifact trail that supports the analysis for RQ1 (principles and practices that emerged through iterative design) and RQ2 (contextual and technological adaptation).

Alongside the slide decks and modules, I also analyzed the broader instructor-authored corpus: the Iteration 1 Final Project assignment, the Iteration 2 syllabus, the GenAI in Five workshop decks for the global cohorts, and my cross-iteration reflective writing (the Keep Up Newsletter and Keep Up Podcast series). All of these are my own intellectual products, and analyzing them does not constitute research on human subjects.

2.6.2 Course Evaluation Inputs · Feedback Surveys

Across the for-credit and global iterations, students completed short post-module feedback surveys that asked them to rate the clarity of instructions, the relevance of examples, perceived difficulty, and overall satisfaction (Daniel, 2014). Open-ended fields invited suggestions for improvement. These surveys were collected under my normal teaching authority as the instructor of record and were used as routine course evaluation: I read them between modules, identified clarity gaps and tool-access frustrations, and revised the next iteration of the slide deck or assignment to address the issues students raised. In this dissertation, the surveys are described as evaluation inputs that triggered specific curriculum revisions; the surveys themselves are not analyzed as research data about students.

2.6.3 Course Evaluation Inputs · Learner-Produced Artifacts

Student work, including prompt-engineering submissions, multimodal GenAI projects, peer feedback, and end-of-iteration assignments, was reviewed across cohorts as part of my normal teaching practice. Reviewing learner work let me identify where the assignment prompts were unclear, where the scaffolding was thin, and where students were beginning to discover new GenAI tools. These insights informed the next round of assignment-design revisions and the next slide-deck update. As with the surveys, the learner artifacts are treated here as evaluation inputs rather than as research data about learners. Where individual student outputs appear as figures in this dissertation, they are presented as instructional examples of the assignment, with the student's consent for educational use.

2.6.4 Course Evaluation Inputs · Written Reflections

Students completed short written reflections after specific assignments, particularly the video and sound modules. The reflection prompts asked what they had learned, whether the GenAI tools were useful in their fields, what challenges they encountered, and what ethical concerns surfaced. The expected length and topics were tailored to each assignment. I read these reflections as part of standard end-of-assignment review and used the patterns I noticed to refine the next iteration's pacing and emphasis. As with the surveys and the work artifacts, the reflections are treated as evaluation inputs that surfaced curriculum-design issues, not as research data about students. Any quoted reflective text in this dissertation appears anonymized.

2.6.5 Human Subjects Scope

This study is curriculum-design research analyzing instructor-authored materials. The unit of analysis is the curriculum — the slide decks, modules, syllabi, and assignment designs that I built and revised across four iterations of the Introduction to GenAI course — not the students who took the course. Course feedback surveys, written reflections, and learner-produced artifacts referenced above were collected under my normal teaching authority for the purpose of improving the course and are treated here as program-evaluation inputs that informed curriculum revisions. This stance is consistent with the scholarship of teaching and learning tradition (Boyer, 1990; Hutchings and Shulman, 1999), with practitioner research conducted from inside the instructor's role (Cochran-Smith and Lytle, 2009), and with the design-based-research framing developed in §2.4 (McKenney and Reeves, 2018).

As the instructor of record, I held student records (rosters, submissions, survey responses) for course-administration purposes. Those records remained under my teaching authority and were not redistributed; nothing identifiable about any individual student is publicly released in this dissertation. Quoted feedback appears anonymized, with no name, cohort identifier, or other detail that would allow re-identification. Any student-produced image that appears in a slide-deck figure is included as an instructional example with the student's consent for educational use. Handling of student records throughout has been consistent with the Family Educational Rights and Privacy Act (FERPA, 20 U.S.C. § 1232g; 34 CFR Part 99) and the University of Colorado Boulder's policies on the protection of student records.

The analytic target is the instructor-authored curriculum, and the analytic claims do not rest on individual student data. The student-facing inputs collected during the iterations served the course's own improvement under standard teaching authority rather than a research protocol designed to generate generalizable knowledge about students. For these reasons, this work falls outside the scope of human-subjects research as defined by the federal Common Rule (45 CFR 46.102(l)), which limits the term research to systematic investigation designed to develop or contribute to generalizable knowledge. Course evaluation conducted by an instructor to improve her own course falls outside that scope, a distinction the OHRP (2014) makes explicit in its guidance on quality improvement versus research.