Draft preliminary note on identifiability and PII review

This appendix is being circulated to the committee as a draft. It contains material that has not yet completed a final review for personally identifiable information. Before the appendix is finalized for submission, every mention of a named individual will be reviewed against the taxonomy in §B.6.5: students named in instructor-produced materials will be anonymized unless explicit written consent for educational use is documented; named guest speakers will be retained as public professional identities with their professional context attached; Luma-platform workshop feedback will be reviewed for anonymization; and external journalism will be retained as already published and consented.

§D.4

Continuous Feedback · the reflexive loop in practice

The main document's third curriculum-design principle, continuous feedback, names the property by which surveys, reflections, and learner responses guide course revision across iterations. This section elaborates the principle from the practitioner-pioneer's reflexive position. What I add is the autoethnographic record of how feedback operated at multiple timescales — within a single lesson, within an iteration, and across iterations — and the analytic naming of the most concrete in-the-moment instance as hallucination-as-pedagogy.

D.4.1 Continuous feedback as a multi-timescale phenomenon

The DBR analysis develops continuous feedback as a between-iteration property: surveys and reflections at the end of each iteration inform the next. The autoethnographic supplement adds two further timescales the DBR analysis does not naturally surface: the in-the-moment timescale, at which a single classroom encounter is reframed as it happens, and the within-iteration timescale, at which one week's reflections shape the next week's instruction.

The DBR analysis of the main document covers the third timescale (across-iteration). The autoethnographic supplement extends the analysis to the first two.

D.4.2 What the corpus shows about continuous feedback across timescales

The principle operates across all three timescales in the corpus:

Timescale · Evidence

In-the-moment · WU-1.W01-Q1 captures my first-week observation of ChatGPT-quiz hallucinations and the immediate reframing as teachable. TR-4.D1-Q3 captures the live workshop passage where I name a hallucination, show the concrete example (no body in the shoes), normalize the phenomenon, and convert the encounter into a re-prompting practice.

Within-iteration · The fifteen-week Weekly Updates Prelim Document (WU-1.W01 through WU-1.W15) is the most complete documentation of weekly feedback shaping next-week instruction in any iteration. The "Learned" sections are first-person reflective data captured at the time of teaching.

Across-iteration · The Iteration 1-to-Iteration 2 tool turnover (DeepSeek added in Iter 2 Week 3 within weeks of public release; AI Agents as a new full module; structural reshuffles documented in §C.6) is the principal cross-iteration feedback evidence. The Iteration 3-to-Iteration 4 compression also operates at this timescale (workshop deck DK-4 preserved the DK-3 template with date headers updated).

D.4.3 The sub-claim · hallucination-as-pedagogy

The autoethnographic analysis surfaces a pattern within continuous feedback at the in-the-moment timescale that I name hallucination-as-pedagogy. The pattern is that generative-AI hallucination, predominantly framed in the technical literature as a system limitation, is also a productive pedagogical phenomenon: the instructor's moment-to-moment encounter with hallucination becomes a teachable opportunity to expose the limits of the technology, invite human verification as a counter-practice, and open space for ethics and accessibility conversations.

The label "hallucination-as-pedagogy" is my analytic coinage in this dissertation. The corpus contains the phenomenon — me reframing the tool's hallucinations as teachable moments across multiple registers from January 2024 through September 2025 — but the label itself emerges here, in the analytic work, not in the iterations' own materials.

D.4.4 Source 1 · Instructor reflection in Iteration 1 Week 1

My first encounter with hallucination as a teachable moment is recorded in the Weekly Updates Prelim Document for Iteration 1 Week 1 (WU-1.W01). I wrote in the "Learned" entry for that week (WU-1.W01-Q1, verbatim):

"I learned that some of the multiple choice quizzes generated by ChatGPT were not correct and had hallucinations."

The note is brief and dated to the very first week of the very first iteration. It captures what I noticed at the time: that the tool I had given students to generate quiz material had produced factual errors, and that the errors were not a deal-breaker for the pedagogy but were themselves something to teach about. The persistence of hallucination as a topic across subsequent iterations (sources 2 through 4 below) indicates that the teachable-moment reframing took hold from this first observation forward.

D.4.5 Source 2 · K-12 children's observation at UW KidsTeam (co-design context)

Six months after Iteration 1 closed, in July 2024, I joined the University of Washington KidsTeam group and Youth Advisory Board for co-design sessions about generative AI in schools. The children and teens were not coached toward any framework I had developed, but I want to characterize the corroboration carefully: I was a participant in the co-design sessions, not an external observer. The children's themes were elicited by the KidsTeam methodology (which the UW team led, not me) and were articulated in their own terms, but the co-design context was collaborative, not independent of my presence in the room.

What the children's articulation provides, then, is corroboration that is near-independent of my own curriculum: the methodology was UW's, the children were not my students, and the analysis was the UW team's. What it is not is fully independent observation. With that caveat noted, the KidsTeam research themes document (KT-THEMES) records the children's challenges with generative AI in their own catalogued form. The hallucination challenge appears as the fifth named challenge (KT-THEMES-C5, verbatim):

"Hallucinations (such as images produced with a third arm)"

The same document records a broader pattern of children's challenges with the tools alongside the hallucination observation: "Generative AI being banned from schools," "Cheating," "AI is data fed into a computer and doesn't yet know everything," and "Lack of emotion and human connection" (KT-THEMES-C1 through KT-THEMES-C4 respectively). The hallucination observation is therefore not an isolated complaint; it sits within a structured list of named challenges the children produced through the UW KidsTeam co-design protocol.

The convergence with my own observation is the strongest single piece of near-independent evidence in the corpus.

D.4.6 Source 3 · Public-facing reframing in Keep Up Newsletter Episode 1

In April 2025, in the first episode of my Keep Up Newsletter on LinkedIn (KN-EP1), I addressed hallucination publicly as part of the "Lessons from Training" section of the episode's running-and-training metaphor (KN-EP1-Q1, verbatim):

"Expect variable results, occasional hallucinations; persistence improves prompting skills; join community groups"

The phrase makes the pedagogical reframing explicit, and the surrounding section makes the reframing structural. Hallucinations are not a fault that should make the reader abandon the tools; they are expected behavior alongside variable results, and the appropriate response is persistence in prompting and engagement with community knowledge. The newsletter audience is a general LinkedIn readership, not a classroom; the reframing is therefore not classroom-specific but a posture I have generalized from the classroom and offered as public guidance.

D.4.7 Source 4 · Live workshop delivery in Iteration 4 Day 1

In September 2025, in the first day of the Iteration 4 GenAI Works workshop, hallucination appears in the live teaching delivery (TR-4.D1-Q3, verbatim):

"And you can also see these different hallucinations that are going on. So there aren't even bodies in these shoes. And so it's really interesting how these different tools are used. Sometimes I get some really really great image generation and it just depends on what you want to use it for, but other times there are a lot of hallucinations that you can get within these specific tools. And so once again, you just have to make sure and take a look at everything. And even if you do get a hallucination on the first time, it doesn't mean give up. You can keep inputting and inputting and inputting the prompts into into the generator and see if it changes. And so I would I would continue to keep working with the specific tool and if you're still not getting a result that you like, then try a different tool."

The passage stages four moves in roughly forty seconds of teaching: naming (the technical term "hallucination"), showing (the concrete shoes-without-bodies example), normalizing (there are "a lot of hallucinations" learners will encounter), and iterating (re-prompt, keep prompting, switch tools). The four moves together are the operational form of hallucination-as-pedagogy. The passage is also the most recent of the four sources for this elaboration; by Iteration 4, the posture was a stabilized component of my workshop curriculum.

D.4.8 The convergence across sources

The four sources span twenty months (January 2024 through September 2025) and four artifact types. Three of the four sources (WU-1.W01-Q1, KN-EP1-Q1, TR-4.D1-Q3) are surfacings of the phenomenon by me, at different points in my practice and across different registers: contemporaneous reflective journaling, public-facing instructional writing, and live workshop delivery. They document the stability of my instructor reframing over time, across registers, but they are not independent observations. The fourth source (KT-THEMES-C5) is the children's own observation in a UW KidsTeam co-design context, and is the corroboration nearest to independence.

What the convergence supports is two claims at different strengths. The stronger claim is that my own framing of hallucination as a teachable phenomenon stabilized across my practice's registers, from private reflection to public writing to live workshop. The weaker claim, supported by one near-independent observation, is that children in adjacent contexts surface the same phenomenon as a concern. Both claims are part of the elaboration.

D.4.9 Theoretical contribution

The technical literature on generative AI predominantly treats hallucination as a system property to be measured and reduced. Bender, Gebru, McMillan-Major, and Shmitchell (2021) provide the canonical formulation in "On the dangers of stochastic parrots," which frames hallucination among the broader harms of large language models that the field should design against. The technical lineage is appropriate to its goals: building better systems.

The pedagogical literature on how to teach with generative-AI tools has not yet developed a corresponding framework for hallucination as a teaching resource. The empirical question this elaboration addresses is not whether hallucination is a system property (it is) or whether system properties should be reduced (they should). The question is what an instructor does with hallucination when it appears in front of learners.

The elaboration contributes the claim that hallucination, encountered in instructional settings, is most productively framed as a teachable phenomenon that exposes the limits of the technology, invites human verification as a counter-practice, and opens space for ethics and accessibility conversations. The contribution is not a curriculum module or a teaching technique; it is a posture an instructor can adopt and that learners across age groups appear to find legible. The claim sits as a sub-claim within the continuous-feedback principle the main document develops.

D.4.10 Guidelines for educators teaching with generative-AI tools

The four moves I stage in TR-4.D1-Q3 (name, show, normalize, iterate) can be transposed by other instructors. Five guidelines, drawn from the cross-source convergence: