Key Takeaways
- OpenAI introduced Advanced Account Security, an optional high‑protection tier for ChatGPT and Codex accounts.
- The feature replaces traditional passwords with two physical security keys or passkeys and disables email/SMS‑based account recovery.
- Recovery must be performed using recovery keys, backup passkeys, or hardware keys; OpenAI support cannot assist with recovery, thwarting social‑engineering attacks on support channels.
- Sessions are shortened, login attempts trigger real‑time alerts viewable from a dashboard, and the opt‑out for model‑training use of conversations is enabled by default for protected accounts.
- OpenAI has partnered with Yubico to offer discounted YubiKey bundles for users who enable the feature.
- Members of the Trusted Access for Cyber program must enable Advanced Account Security by June 1 or provide an equivalent enterprise SSO attestation.
- The move mirrors industry practices such as Google’s Advanced Protection program and is part of OpenAI’s broader cybersecurity strategy announced earlier this month.
Introduction to Advanced Account Security
On Thursday, OpenAI announced the rollout of Advanced Account Security (AAS), an optional security upgrade designed to fortify ChatGPT and Codex accounts against sophisticated takeover attempts. The feature adds an extra layer of protection that goes beyond standard two‑factor authentication, aiming to make credential‑theft and phishing attacks substantially harder. By introducing AAS, OpenAI responds to growing concerns that AI‑powered accounts now store valuable personal and professional data, making them attractive targets for attackers.
Motivation Behind the Feature
OpenAI emphasized that many users rely on ChatGPT for deeply personal questions and increasingly high‑stakes work, ranging from journalistic investigations to policy‑making and scientific research. Over time, an account can accumulate sensitive context, become a hub for connected tools, and sit at the center of critical workflows. For groups such as journalists, elected officials, political dissidents, researchers, and security‑conscious individuals, the potential fallout from a compromised account is especially severe, prompting the need for a stronger safeguard.
How Authentication Changes
When a user enables Advanced Account Security, regular passwords are no longer usable. Instead, the account must be protected by two physical security keys or passkeys (e.g., YubiKeys or device‑based passkeys). Because signing in requires physical possession of a registered hardware token, phishing and credential‑theft attacks become dramatically harder to carry out. The shift to hardware‑based authentication aligns with the phishing‑resistant standards advocated by industry leaders.
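OpenAI has not published implementation details, but hardware keys and passkeys generally follow the WebAuthn model, in which the authenticator signs a server challenge bound to the site origin it actually saw. The minimal Python sketch below (the origins, function names, and HMAC stand-in for a real key-pair signature are all illustrative assumptions) shows why such signatures resist phishing: a signature captured on a look-alike domain never verifies on the real one.

```python
import hashlib
import hmac
import os

def key_sign(device_secret: bytes, challenge: bytes, origin: str) -> bytes:
    # The authenticator mixes the origin it observed into the signed data.
    # (Real authenticators use public-key signatures; HMAC keeps the sketch short.)
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(device_secret: bytes, challenge: bytes, signature: bytes) -> bool:
    # The server accepts only signatures bound to its own origin.
    expected = hmac.new(
        device_secret, challenge + b"https://chatgpt.com", hashlib.sha256
    ).digest()
    return hmac.compare_digest(expected, signature)

secret = os.urandom(32)      # stands in for the hardware key's private material
challenge = os.urandom(16)   # fresh per-login server challenge

# Legitimate login: origins match, so the signature verifies.
ok = server_verify(secret, challenge, key_sign(secret, challenge, "https://chatgpt.com"))

# Phished login: the user was lured to a spoofed domain, so verification fails
# even though the challenge and device secret are identical.
phished = server_verify(
    secret, challenge, key_sign(secret, challenge, "https://chatgpt-login.example")
)
```

The origin binding, not the user's vigilance, is what defeats the phishing site, which is why this class of authentication is called phishing‑resistant.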
Account Recovery Modifications
AAS also eliminates email and SMS recovery routes, which are common vectors for social‑engineering attacks. Account recovery can now be performed only with recovery keys, backup passkeys, or additional physical security keys that the user has previously generated and stored safely. By removing weaker recovery channels, OpenAI ensures that even if an attacker gains access to a user's email or phone, they cannot reset the account without the authorized hardware tokens.
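The exact format of OpenAI's recovery keys is not public. As a hedged illustration of the general pattern, the sketch below generates a high‑entropy recovery code for the user while the server stores only its hash, so that neither a support agent nor an attacker who breaches the email account can reconstruct the code. All names and formats here are assumptions, not OpenAI's implementation.

```python
import hashlib
import secrets

def generate_recovery_key() -> tuple[str, str]:
    # Hand the user a 128-bit random code; persist only its SHA-256 digest.
    code = secrets.token_hex(16)
    digest = hashlib.sha256(code.encode()).hexdigest()
    return code, digest

def redeem(code: str, stored_digest: str) -> bool:
    # Recovery succeeds only if the presented code hashes to the stored digest.
    return hashlib.sha256(code.encode()).hexdigest() == stored_digest

user_code, server_digest = generate_recovery_key()
```

Because only the digest is stored server‑side, there is nothing for a support channel to reveal or reset, which is exactly the property that makes support‑desk social engineering fruitless.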
Support Limitations
A critical design choice is that OpenAI’s support team cannot assist with account recovery once Advanced Account Security is enabled. Because support no longer has access to or control over the recovery options, attackers cannot exploit support portals through impersonation or social engineering to hijack accounts. This restriction places the recovery burden squarely on the user, reinforcing the importance of safeguarding backup keys and hardware tokens.
Session Management and Alerts
The feature enforces shorter sign‑in windows and session durations, requiring users to re‑authenticate more frequently on each device. Additionally, any login to a locked‑down account triggers an alert that appears in the user’s dashboard, where active ChatGPT and Codex sessions can be reviewed. These mechanisms provide near‑real‑time visibility into account activity, helping users spot and respond to unauthorized access promptly.
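OpenAI has not disclosed the shortened session durations, so the sketch below uses a hypothetical 12‑hour TTL purely to illustrate the mechanism: each session records when it was issued, and any check past the window forces re‑authentication with the hardware key.

```python
import time

SESSION_TTL = 12 * 3600  # hypothetical 12-hour window; actual value is not public

def session_valid(issued_at: float, now: float, ttl: float = SESSION_TTL) -> bool:
    # A session older than the TTL is rejected, forcing a fresh key-based login.
    return (now - issued_at) < ttl

now = time.time()
fresh = session_valid(now - 3600, now)       # session issued 1 hour ago
stale = session_valid(now - 24 * 3600, now)  # session issued 24 hours ago
```

Shorter windows shrink the value of a stolen session token: even a successful cookie theft expires quickly, and the forced re‑login surfaces as a dashboard alert the user can act on.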
Privacy and Model Training Opt‑Out
For users who enable Advanced Account Security, the option to exclude their conversations from model training is activated by default. While all users can manually opt out, AAS users receive this protection automatically, reducing the risk that sensitive interactions could be inadvertently used to improve OpenAI’s models. This default setting reflects the heightened privacy expectations of the feature’s target audience.
Partnership with Yubico
To lower the barrier to adoption, OpenAI has partnered with Yubico to offer discounted YubiKey bundles specifically for Advanced Account Security subscribers. The collaboration aims to make reputable hardware tokens more accessible, encouraging a broader user base to adopt the phishing‑resistant authentication method without incurring prohibitive costs.
Requirements for Trusted Access for Cyber Program
Members of OpenAI’s Trusted Access for Cyber program—which grants cybersecurity professionals, researchers, and others early access to new models—will be required to enable Advanced Account Security beginning June 1. Alternatively, they may submit an attestation demonstrating that they employ phishing‑resistant authentication through an enterprise single sign‑on (SSO) mechanism. This mandate holds users with early model access to security standards commensurate with their elevated privileges.
Broader Context and Industry Comparison
Advanced Account Security fits within a larger trend of providing elevated protection tiers for high‑risk accounts, exemplified by Google’s Advanced Protection program, which has existed for nearly a decade. As AI services become integral to personal and professional workflows, the demand for robust defenses against credential theft, phishing, and account hijacking has intensified. OpenAI’s launch reflects its commitment to a comprehensive cybersecurity strategy, addressing both technical controls and user‑centric safeguards to protect the growing ecosystem of AI‑powered tools.