Contributions
This section names the contributions the dissertation makes to scholarship, organized around the three curriculum-design principles and their autoethnographic elaborations. The contributions are of three kinds: methodological, substantive, and pedagogical. I take each in turn, name its claim, locate its evidentiary base, and indicate the literatures to which it speaks. I close with limitations and future directions.
D.5.1 Methodological contribution · supplementing design-based research with analytic autoethnography
My methodological contribution is the demonstration that analytic autoethnography (Anderson 2006) can supplement design-based research at a generative-AI pedagogy site. The supplement deepens each of the three curriculum-design principles the main document develops by surfacing the practitioner-pioneer's reflexive account of how the principles emerged from her practice and what they reveal at a closer analytic level.
The methodological contribution is the supplementation move itself, not either methodology in isolation. DBR is a mature methodology; analytic autoethnography is a twenty-year-old framework; neither is novel. What is new in the engineering-education and HCI literatures is the combined deployment of the two at a fast-moving emerging-technology pedagogy site, with the autoethnographic supplement reading the same artifact corpus that the DBR primary analysis reads but surfacing a different layer of analytic detail within the same principles.
Mapped to Wobbrock and Kientz's (2016) taxonomy of research contributions in HCI, the methodological contribution is what they call an opportunistic contribution (deploying established methodologies in a new configuration), folded together with a survey contribution (the per-iteration record itself constitutes a survey of how a pioneer instructor's practice unfolds at the GenAI moment). It is not an empirical contribution in the controlled-experiment sense; the dissertation does not claim that pattern.
D.5.2 Substantive contribution · the three principles enriched by autoethnographic elaboration
My substantive contribution is the three curriculum-design principles (modularity, learner choice, and continuous feedback) as the main document develops them, enriched by three autoethnographic elaborations that this appendix surfaces. The principles are the practitioner-facing outputs the field can adopt; the elaborations are the scholar-facing outputs that explain how the principles emerged and what they reveal at a closer analytic level.
| Principle | Autoethnographic elaboration (sub-claim) |
|---|---|
| Modularity | The four-theme architecture's documented stability under tool turnover and format compression. Compression-as-curriculum-maturation as the analytic name for the pattern (§D.2). |
| Learner Choice | Dialogue with informants across eight contexts of varying duration, audience, and depth. Multi-channel teaching practice as the analytic name for the pattern (§D.3). |
| Continuous Feedback | The reflexive feedback loop the instructor maintained with tools and learners. Hallucination-as-pedagogy as the analytic name for the most concrete in-the-moment instance (§D.4). |
The substantive contribution is neither the principles alone (DBR could surface them) nor the elaborations alone (autoethnography could surface them). It is the two layers working together: principles that adopters can use and elaborations that explain to scholars what the principles' adoption requires of the practitioner.
D.5.3 Pedagogical contribution · the four-theme architecture's documented stability
My pedagogical contribution is the empirical record of how the four-theme curriculum architecture (Education, Industry, Ethics, Accessibility) functions as a stable organizing structure under tool turnover, audience shift, and delivery-format compression. The contribution is the stability record, not the category names (§A.4.4 acknowledges that adjacent AI-literacy frameworks use overlapping category names; the four-theme list itself is not novel).
The stability is documented along three dimensions:
- Across iterations and tool turnover. The architecture held across four iterations spanning two and a half years and at least three generations of underlying tools (image generation from DALL-E to Midjourney to Nano Banana; large language models from ChatGPT to DeepSeek to Claude 3.7; new capabilities including AI Agents and accessibility-targeted tools like Be My AI).
- Across delivery contexts. The architecture held across eight contexts (§D.3) from undergraduate semester courses to online workshops to K-12 outreach to federal-research webinars.
- Across audience age groups. The architecture held against learners from elementary-school age through graduate students and professional federal-research audiences, with cross-channel resonance documented in §A.5.
The architecture is testable. A subsequent instructor of an emerging-technology curriculum could adopt this architecture, deliver under it, and report whether it holds against their cohort, their tools, and their institutional context. The architecture is small enough to remember (four themes), explicit enough to operationalize (each theme has a named scope; ethics is positioned cross-cuttingly), and flexible enough to accommodate substantial tool turnover (my own corpus documents this).
The pedagogical contribution speaks to the engineering-education and HCI pedagogy literatures and to the broader AI-literacy literature emerging from 2024 through 2026. It is not the only valid framework for generative-AI pedagogy. It is a framework whose architectural stability has been empirically documented across the configurations named above.
D.5.4 Consolidated guidelines for educators teaching generative AI
Anderson's (2006) framework for analytic autoethnography licenses pedagogical-implications work as a legitimate genre of contribution: the insider's analytic position is supposed to produce insight that travels to adjacent practitioners. The guidelines below are that kind of contribution. They are not the autoethnographic elaborations themselves (§D.2 through §D.4); they are the educator-facing distillation of what those elaborations suggest for practice, mapped to the three principles. They are offered with the appropriate hedging: the single-case scope of this dissertation (§D.5.5) means the guidelines are starting points for adopters in adjacent settings rather than empirical prescriptions.
Across the three principles, five top-line guidelines emerge for educators planning, delivering, or sustaining a generative-AI curriculum:
- Build the curriculum around a small set of stable themes that can absorb tool turnover. This is the modularity guideline: small enough to remember, explicit enough to operationalize, flexible enough to accommodate the next tool the field releases. The four themes that worked here (Education, Industry, Ethics, Accessibility) are one starting set; other configurations are plausible.
- Iterate at full length before compressing. This is the modularity-and-maturation guideline: the five-day workshop format is the outcome of two semester-length passes, not the starting point. Pioneer instructors compressing to a workshop without prior full-length delivery will likely omit what the longer iterations would have surfaced as essential.
- Run the curriculum across multiple contexts deliberately. This is the learner-choice guideline: K-12 outreach, public-facing writing, podcasts, and federal-research webinars are not extracurricular. They are where the architecture is tested against audiences beyond enrolled students, and where cross-channel resonance becomes evidence that the architecture holds.
- Credit students publicly when their tool discoveries shape the curriculum, and document the flow. This is also a learner-choice guideline: the Ethan Cuenca to Soundful case (§D.3.5) is one named instance. The student-to-instructor tool flow is part of pioneer practice and warrants documentation as such.
- Treat hallucination, error, and surprise as the teachable centerpieces, not the failures to hide. This is the continuous-feedback guideline: what the technical literature frames as system limitations are what an instructor in front of learners can convert into the most legible moments of the curriculum. Name the phenomenon, show it, normalize it, convert it into a re-prompting practice.
These guidelines are calibrated to pioneer instructor practice in fast-moving technology fields. Whether they travel to other domains (mature curricula, slower-moving fields, multi-instructor programs) is an empirical question for subsequent scholarship, addressed in §D.5.7 Future directions.
D.5.5 Limitations
I name three limitations of the contributions claimed above.
Single-instructor, single-program scope. My dissertation documents one instructor's pioneering practice at one research university. The principles, the autoethnographic elaborations, and the guidelines are claims about pioneer practice at the generative-AI site between 2023 and 2025. They do not generalize automatically to other pioneers, other sites, or later periods when the field has matured beyond pioneer-entrant status. Whether they travel is an empirical question for subsequent scholarship.
Asymmetric reflective data across iterations. My contemporaneous structured reflective journaling is concentrated in Iteration 1 (WU-1.W01 through WU-1.W15). Iterations 2, 3, and 4 draw on my public-facing reflective writing (KN-EP series across the 2025 iterations, KP-EP series in May 2025, WB-2026-03-03 in March 2026) as the supplementary reflective base, within an acknowledged retrospective-public frame that analytic autoethnography permits (§B.6.3). The reflective base for the later iterations is therefore narrower than for Iteration 1 but not absent.
Researcher's investment in the work. I have a stake in the work succeeding and in the autoethnographic elaborations landing as contributions. Analytic autoethnography does not require neutrality, and §B.7.4 names my investment explicitly. The trustworthiness moves I make in response (multi-source anchoring of each elaboration, named limits of the data, dialogue with chair and committee) are described in §B.6 and §B.7. They do not eliminate the investment; they discipline it.
D.5.6 Why this matters now
The dissertation's timing is part of its contribution. The four iterations span the post-ChatGPT period when generative-AI pedagogy emerged as a field, and the dissertation is, in its specific autoethnographic configuration, one of the empirical records of what that emergence looked like from inside an instructor's practice. Beyond the substantive contributions named in §D.5.2, three reasons make the timing matter now.
First, the empirical record of what an instructor learned in 2024 and 2025 will be retrospectively important to AI-pedagogy scholarship as the field matures. Reconstructions written years later will not have access to contemporaneous artifacts of the kind preserved here: the Weekly Updates Prelim Document captured in real time as the tools were released; the workshop transcripts capturing teaching delivery while the audience-engagement patterns were still novel; the public-facing newsletter and podcast written as the instructor was still figuring out what to teach. The historical-record value of this corpus increases with time.
Second, the methodological framing offers an alternative to the controlled-experiment paradigm that dominates engineering-education research on emerging-technology pedagogy. Pioneer instructors of fast-moving fields cannot, as a practical matter, run the controlled studies the dominant paradigm asks for: the technology changes faster than the IRB approves, the comparison condition does not exist (there is no established curriculum to compare against), and the timeline of the research mismatches the timeline of practical relevance. Analytic autoethnography of the kind this dissertation performs is one way the field can do legible scholarship at the timescale the technology demands.
Third, the educator-facing guidelines (§D.5.4) are time-sensitive contributions. Educators teaching generative AI in 2025 and 2026 face design choices the field has not yet stabilized answers to: how to handle hallucination in real time, whether to compress to workshop format or hold semester length, whether to extend across multiple contexts. The guidelines drawn from this dissertation are oriented to those choices and travel best while the choices are still open.
D.5.7 Future directions
I name three directions for subsequent scholarship that the dissertation opens.
Comparative autoethnographies of pioneering instructors. Subsequent analytic autoethnographies of other early-entrant generative-AI instructors at other institutions would test whether the three autoethnographic elaborations I claim travel beyond my case. A small comparative collection of pioneer-instructor analytic autoethnographies would also begin to build the empirical record that the field currently lacks.
Longitudinal tracking of students into practice. My corpus documents student-to-instructor flow at a specific scale (Ethan Cuenca and Soundful, the named case in §D.3.5) and student-public engagement at a specific scale (Ashley Stafford quoted in Aspen Public Radio, AP-2024-05-16). Longer-term tracking of students from pioneer GenAI courses into industry, education, or research roles would extend the documentation of how pioneering instruction propagates.
Curriculum architecture portability studies. The four-theme architecture is testable against subsequent instructor cohorts. A study in which two or three subsequent instructors of generative-AI courses adopt the architecture, deliver under it, and report on its hold against their cohorts would empirically test portability (the next claim past the stability record this dissertation documents in §D.5.3).
These directions are not promised by my dissertation. They are openings the dissertation creates for scholarship that follows it.