OpenAI Rolls Out ‘Advanced’ Security Mode for At-Risk Accounts

OpenAI is rolling out Advanced Account Security for people concerned that their ChatGPT or Codex accounts could be targets of phishing attacks.

On Thursday, the company announced an optional, advanced tier of account protection, named Advanced Account Security. The feature is designed to harden user accounts against attack, making it far more difficult for unauthorized individuals to gain access.

Advanced account protection tiers are not novel; Google, for instance, has offered its Advanced Protection Program for nearly a decade. But the rapid spread of mainstream AI services worldwide has underscored the need for stronger safeguards. OpenAI's introduction of this feature is part of a broader cybersecurity strategy the company unveiled earlier this month.

OpenAI recognizes the growing reliance on AI for a wide range of applications, from answering deeply personal questions to performing high-stakes tasks. Over time, a ChatGPT account can become a repository of sensitive personal and professional information, serving as a hub for connected tools and workflows. For certain individuals—journalists, elected officials, political dissidents, researchers, and those particularly conscious of security—the stakes are even higher.

The Advanced Account Security feature fundamentally changes how users access their accounts. Regular passwords are no longer sufficient. Instead, users must add two physical security keys or passkeys, significantly reducing the risk of successful phishing attacks. The feature also eliminates the use of email and SMS for account recovery; users must rely on recovery keys, backup passkeys, or physical security keys instead. To make hardware keys more accessible, OpenAI has partnered with Yubico to offer lower-cost YubiKey bundles to users who enable Advanced Account Security.
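What makes security keys and passkeys phishing-resistant is origin binding: a credential is scoped to the site it was registered with, and the authenticator will only produce an assertion for that same origin, so a look-alike domain gets nothing even if the user is fooled. The Python sketch below is a simplified illustrative model of that check (real WebAuthn uses per-site public-key signatures enforced by the browser and authenticator; the HMAC here merely stands in for the signature, and all names are hypothetical):

```python
import hashlib
import hmac

# Simplified model of WebAuthn origin binding (illustrative only).
# A credential is bound to the relying party's origin at registration,
# and assertions are only produced for that exact origin.

def register(origin: str, secret: bytes) -> dict:
    """Create a credential bound to the registering origin."""
    return {"origin": origin, "secret": secret}

def sign_assertion(credential: dict, requesting_origin: str, challenge: bytes):
    """Refuse to sign for any origin other than the one the credential
    was registered with -- this is what defeats phishing sites."""
    if requesting_origin != credential["origin"]:
        return None  # look-alike domain gets no assertion
    return hmac.new(credential["secret"], challenge, hashlib.sha256).digest()

cred = register("https://chatgpt.com", b"device-private-secret")
challenge = b"server-random-challenge"

# Legitimate origin gets a valid assertion; a phishing domain gets None.
assert sign_assertion(cred, "https://chatgpt.com", challenge) is not None
assert sign_assertion(cred, "https://chatgpt.example-phish.com", challenge) is None
```

Because the check happens inside the authenticator rather than in the user's head, even a pixel-perfect fake login page cannot harvest a reusable credential, which is the property password- and SMS-based logins lack.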

One crucial aspect of this feature is that once a user enables Advanced Account Security, they can no longer seek assistance from OpenAI's support team for account recovery, because the support team no longer has access to or control over any of the recovery options. This measure is designed to prevent attackers from breaching accounts by targeting support portals with social engineering attacks.

Furthermore, Advanced Account Security enforces shorter sign-in sessions, so users must log in again sooner on each device. It also generates an alert whenever someone signs in to the secured account, directing the user to a dashboard for reviewing active ChatGPT and Codex sessions. OpenAI already lets any user opt out of having their ChatGPT conversations used for model training; for Advanced Account Security users, that exclusion is enabled automatically.

Members of OpenAI's Trusted Access for Cyber program, which includes cybersecurity professionals, researchers, and others with advanced access to new models, will be required to enable Advanced Account Security starting June 1. Alternatively, they can submit an attestation that they implement phishing-resistant authentication through an enterprise single sign-on mechanism.

OpenAI's introduction of Advanced Account Security is a significant stride in digital security. As AI services become increasingly integrated into our personal and professional lives, the need for robust account protections only grows. By enforcing phishing-resistant authentication and closing off common account-recovery attack paths, OpenAI is setting a high standard for account security in the AI industry.