Key Takeaways
- Assemblyman Andrew Macurdy (D‑Union) plans a three‑bill “common sense” package to regulate AI‑generated media in New Jersey.
- The bills would (1) require clear, conspicuous labeling of AI‑created photos, video and audio; (2) ban unauthorized use of a person’s likeness in generative AI; and (3) impose civil penalties on developers whose AI tools facilitate certain crimes.
- A Stockton Poll shows growing public anxiety: 41% of registered voters believe AI will worsen their lives, and 56% support banning data centers in their towns.
- New Jersey currently has only three AI‑related laws; the proposed package would make the state one of the first to mandate explicit AI‑content disclosures, comparable to California’s forthcoming rules.
- If enacted, the legislation would give individuals a right to sue for non‑consensual AI‑generated depictions and empower the attorney general’s office to fine developers $20,000 per violation for criminal misuse of AI.
Assemblyman Macurdy’s Motivation and Legislative Vision
Assemblyman Andrew Macurdy told NJ Spotlight News, “I think we need to get ahead of it,” reflecting his belief that proactive state regulation can set a national precedent. He described his upcoming trio of bills as a “common sense” response to the surge of AI‑generated content on social media. Macurdy argues that without clear guardrails, the line between reality and fabrication will continue to blur, eroding public trust.
The AI Image Disclosure Act: Labeling Synthetic Media
The first bill, dubbed the AI Image Disclosure Act, would go beyond California’s upcoming requirement that AI‑produced audio and visual media contain an embedded origin‑timestamp. Macurdy’s version mandates a “clear, conspicuous, appropriate for the medium of the content and understandable to a reasonable person” disclosure that the material was created using AI. Moreover, social‑media platforms would be obligated to generate explicit disclaimers based on that embedded information, ensuring viewers see the label even if they do not inspect the file’s metadata.
Protecting Personal Likeness: The AI Likeness Protection Act
The second proposal, the AI Likeness Protection Act, would prohibit the distribution of any realistic AI‑generated representation of a person—whether via text, image, video, or audio—without that individual’s explicit consent. Violators could be sued civilly by the depicted person. Macurdy warned, “There’s just going to be content out there of you, whether you’re a public figure or not, doing things that you didn’t do,” emphasizing the privacy and reputational risks posed by deepfakes and synthetic media. Several states already recognize a right to one’s voice and likeness in AI contexts; Macurdy’s bill would codify that protection in New Jersey.
Holding Developers Accountable: The AI Accountability Act
The third bill, the AI Accountability Act, targets the developers of AI platforms that facilitate criminal activity. It would impose civil penalties of $20,000 per violation for uses of AI in extortion, theft by deception, or the creation of child sex‑abuse imagery. Enforcement would fall to the state attorney general’s office. Macurdy characterized this as a “fair burden” on developers, arguing that those who build powerful generative tools should bear responsibility for preventing their misuse.
Public Sentiment and the Polling Data
A recent Stockton Poll underscores the urgency Macurdy feels. The survey found that 41% of New Jersey registered voters believe increased AI use will make their lives worse—up from 36% two years earlier—while only a little more than a quarter think AI will improve their lives. Moreover, 56% of respondents would support a ban on data centers in their municipalities, and nearly half said such centers do more harm than good. These figures illustrate a growing apprehension not only about AI’s societal impact but also about the infrastructure required to power it.
New Jersey’s Current AI Legislative Landscape
Presently, New Jersey has enacted only three AI‑related statutes, the most recent of which criminalizes deepfakes used for harassment, extortion, or other unlawful purposes. By contrast, the Orrick U.S. AI Law Tracker shows that states have passed 224 AI laws, with California leading at 29. California’s forthcoming rules will be the first to require both an embedded disclosure of AI origin and a free, publicly accessible tool for users to verify whether content is AI‑generated. Utah and New York have similar transparency mandates, though New York’s applies narrowly to commercial advertisements.
How Macurdy’s Package Compares to Existing State Laws
A review by NJ Spotlight News revealed that only New York currently mandates a clear disclosure that AI was used in commercial ads. Macurdy’s AI Image Disclosure Act would expand that requirement to all forms of media and impose an additional, visible label on platforms—something no other state has yet implemented. His likeness and accountability bills also break new ground by creating private rights of action for individuals and imposing fines on developers for criminal misuse, measures that remain uncommon across the United States.
Legislative Path Forward and Potential Impact
Macurdy, a member of the Assembly Science, Innovation and Technology Committee, notes that a seven‑bill AI regulatory package recently advanced in that committee but awaits full‑Assembly action. His three bills would join that effort, aiming to position New Jersey as a leader in responsible AI governance. If passed, the legislation could influence other states seeking balanced approaches that protect consumers, preserve personal rights, and deter illicit uses of emerging technology—while still fostering innovation.
Quotations and poll figures are drawn from the original NJ Spotlight News report.

