Key Takeaways
- A Palo Alto parent, Takashi Kato, has filed a federal lawsuit alleging that his son was wrongly accused of using AI to write an English essay, forced to retake the assignment in person, and consequently received a lower grade.
- The suit claims the teacher relied solely on Turnitin’s AI‑detection score (76% flagged) without conducting an educator‑driven review or giving the student a meaningful chance to rebut the allegation.
- Kato argues the procedure was discriminatory against his multilingual Asian son, violated due‑process rights, and lacked a formal district policy on AI use, placing teachers in a legal gray area.
- The family submitted extensive evidence—drafts, notes, and revision histories—to refute the AI claim and demanded a neutral B grade, expungement of the accusation, and an end to the district’s in‑person retake practice.
- The lawsuit cites academic research questioning the reliability of AI detectors, especially for non‑native English writers, and notes that many prestigious universities have disabled Turnitin’s AI‑writing service over fairness concerns.
- As of the filing date, the Palo Alto Unified School District has not responded; a case management conference is set for August 6, 2025.
Background of the Dispute
Palo Alto Unified School District (PAUSD) has been experimenting with artificial intelligence in classrooms, sending administrators to AI workshops to explore instructional applications. Yet, despite this forward‑looking stance, the district has not adopted a comprehensive policy governing AI use or AI‑detection tools, leaving individual teachers to develop their own approaches. This policy vacuum created the context for the conflict that led to the lawsuit, as educators were left to interpret and enforce AI‑related academic standards without clear guidance.
The Essay Submission and Turnitin Flag
In October 2024, Takashi Kato's sophomore son submitted an essay on Arthur Miller's The Crucible for his Palo Alto High School English class. Two weeks later, the essay was run through Turnitin, the district's AI‑detection software. According to the lawsuit, teacher Sarah Bartlett reported that Turnitin flagged 76% of the essay as AI‑generated or AI‑influenced. Bartlett also asserted that the student had admitted to using Grammarly for synonym searches—a claim Kato later contested as false. The teacher's classroom policy, described as non‑punitive, allowed students to retake the assignment in person if AI use was suspected.
The Retake and Resulting Grade
Following the Turnitin alert, the student was required to rewrite the essay during class time. He earned a D on the rewrite, which pulled his overall course grade down to a C. Kato contends that this outcome was punitive, arbitrary, and not grounded in any formal academic procedure. He argues that the district’s lack of a standardized grading protocol for AI‑related accusations rendered the process subjective and unfair.
Evidence Submitted by the Family
To challenge the AI accusation, Kato compiled nearly 1,200 pages of documentation, including essay drafts, handwritten notes, and a full revision history of the digital document. He presented this evidence to school officials, insisting it demonstrated the student's original work and refuted the claim of AI assistance. After several exchanges, the family issued an ultimatum: give the student a neutral B grade by March 6, 2025, or face legal action. The district did not comply.
Allegations of Discrimination and Retaliation
Kato asserts that his son was targeted because he is a multilingual Asian male, suggesting a pattern of bias. He references academic studies and university policies that have questioned the reliability of AI detectors, particularly highlighting bias against non‑native English writers. The lawsuit notes that many prestigious institutions have disabled Turnitin’s AI‑writing service due to fairness concerns. Kato also claims that after lodging the complaint, teachers retaliated against his son, further harming the student’s academic standing.
Legal Claims and Requested Relief
The lawsuit advances multiple counts: discrimination, retaliation, denial of due process, and improper grading policies. Kato seeks several forms of relief:
- Restoration of his son’s grade to a B;
- Removal of the AI‑use allegation from the student’s record;
- An in‑depth, impartial grading evaluation of the retake essay;
- Cancellation of the district’s in‑person retake practice for AI‑suspected work;
- Compensation for emotional and procedural harms caused by the burden‑shifting, delays, and reliance on the unverified Grammarly claim.
He warns that the incident could jeopardize his son’s college admissions prospects, amplifying the stakes of the case.
District Response and Procedural Timeline
As of the lawsuit’s filing, the Palo Alto Unified School District had not submitted a formal response to the complaint and did not immediately reply to a request for comment. Court documents indicate that a case management conference is scheduled for August 6, 2025, at which point the parties will likely discuss discovery, potential settlement, or motions to dismiss. The outcome may set a precedent for how schools handle AI‑detection tools and the procedural safeguards required when allegations of AI‑assisted cheating arise.
Quoted excerpts from the lawsuit, as presented in the original article:
- “The Turnitin tool’s output was treated as dispositive without educator‑driven evaluation or a meaningful opportunity for the student to respond before sanctions were imposed.”
- “Academic studies and university policies have questioned the reliability of AI detectors and identified bias against non‑native English writers.”
- “Many prestigious universities have turned off the tool due to its fairness concerns.”
- “The family incurred emotional and process harm from burden-shifting, delay, minimization of chronology, and reliance on the unproven Grammarly assertion.”
These statements underscore the family’s core arguments: that reliance on an automated score alone violates fair academic practice, that the tool may disadvantage multilingual learners, and that the district’s lack of policy exacerbates the problem.
Conclusion
The case highlights a growing tension between educational institutions’ enthusiasm for AI technologies and the need for transparent, equitable policies governing their use. Without clear guidelines, teachers may resort to ad‑hoc measures—such as mandatory in‑person rewrites—that can lead to disputes over fairness, discrimination, and due process. The forthcoming court proceedings will likely illuminate whether districts must adopt stricter safeguards around AI detection tools and how schools can balance innovation with the protection of student rights.