Unexpected Findings
The study revealed several unexpected findings that helped me understand how an adaptive GenAI course actually works. One was that activity designs that worked well at small scale required substantial redesign at larger scale. In the first two university offerings, long in-person labs and detailed peer critiques supported extensive in-class review of artifacts. When the course grew to 100 and then 1,000 global participants, those same designs became difficult to manage because of time limits and platform constraints. I redesigned peer review to use clearer prompts, simple rubrics, and fewer artifacts per person, and I leaned more on asynchronous discussions and short, focused feedback. This change showed that as cohort size grows, some depth must be balanced against more structure and breadth in the curriculum design, even when the core goals of collaboration and reflection stay the same.
Another surprise involved how particular tools rose and fell in importance over time. At several points, tools that I expected to be central lost their place because of pricing changes, access restrictions, or interface redesigns that made them less feasible for students with limited resources or time. At the same time, learners frequently brought new tools to class during "teach-out" sessions and discussions. Some of these student‑introduced tools quickly became important examples in assignments and demos, especially when they offered more accessible interfaces, free tiers, or features that better fit project goals. This pattern highlighted that student co‑discovery was not just a nice extra but a practical mechanism for keeping the course current with the GenAI landscape. It also reminded me that my own preferences and professional habits could not fully predict which tools would matter most to learners.
These surprises reinforce the framing of GenAI as a black swan technology and support the decision to use curriculum‑as‑research and iterative teaching‑as‑inquiry as the core approach. Rather than searching for a single, stable "best" design, the course needed to be flexible enough to absorb surprises, let go of tools or practices that no longer served learners, and incorporate new possibilities surfaced by students. At the same time, these tensions made my positionality as an instructor‑researcher more visible. My excitement about certain tools, my background in art and engineering, and my assumptions about what "good" GenAI work looks like all shaped early choices and interpretations. Reflexive notes, peer feedback, and student input helped me notice these influences and adjust for them, but they did not remove them. In this way, the unexpected findings say something not only about the course but also about what it means to study a disruptive technology from the inside, while actively teaching and designing with it.