Your AI Girlfriend Now Has Rules in Four States. Here's What They Actually Say.
On March 19, 2026, Washington Governor Bob Ferguson signed House Bill 2225 into law. The bill regulates AI companion chatbots — systems designed to simulate ongoing relationships with users — with specific protections for minors.
Washington is the fourth state to pass companion chatbot legislation in six months. California, New York, and Oregon all enacted similar laws between October 2025 and March 2026. Georgia and Hawaii have bills moving through their legislatures. Massachusetts has proposed banning AI chatbots for therapeutic use entirely.
This is happening for specific reasons, and the laws say specific things. Here's what's actually in them.
Why Four States Moved in Six Months
In October 2024, Sewell Setzer III, a 14-year-old in Florida, died by suicide after developing a deep relationship with a Character.AI chatbot. When Sewell expressed explicit suicidal thoughts, the chatbot failed to alert anyone or provide prevention resources. A 17-year-old in Texas with autism turned to AI companions out of loneliness and encountered bots that encouraged both self-harm and violence against his family.
The lawsuits followed. Character.AI and Google agreed to settle multiple cases in January 2026 — in Florida, New York, Colorado, and Texas.
The usage numbers explain the urgency. A July 2025 Common Sense Media study found that 72% of American teens have used AI companions, with 52% using them regularly and 34% reporting daily use. Nearly a third (31%) said conversations with AI companions were "as satisfying or more satisfying" than talking with real friends.
MIT Technology Review named AI companions one of its 10 Breakthrough Technologies for 2026.
What the Laws Actually Require
The four states share a common framework with meaningful differences in scope and enforcement.
Disclosure
All four laws require operators to tell users they're interacting with AI. The frequency varies:
- California and New York: Disclosure at interaction start and every three hours
- Washington: Disclosure at interaction start, each new session, and every three hours for adults — every hour for minors
- Oregon: Disclosure at interaction start and at regular intervals, plus a statement that the chatbot may not be suitable for minors
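For operators, these cadences reduce to a scheduling rule keyed on jurisdiction and user age. A minimal sketch of that logic, assuming the intervals summarized above (all function and variable names here are ours, not from any statute):

```python
from datetime import timedelta

# Hypothetical compliance helper: maps state + user category to the maximum
# interval between "you are talking to an AI" reminders, per the summaries
# above. Session-start disclosures are assumed to be handled separately.
DISCLOSURE_INTERVALS = {
    ("CA", "adult"): timedelta(hours=3),
    ("CA", "minor"): timedelta(hours=3),
    ("NY", "adult"): timedelta(hours=3),
    ("NY", "minor"): timedelta(hours=3),
    ("WA", "adult"): timedelta(hours=3),
    ("WA", "minor"): timedelta(hours=1),
}

def disclosure_due(state, is_minor, elapsed_since_last):
    """Return True if a fresh AI-disclosure notice is due."""
    key = (state, "minor" if is_minor else "adult")
    interval = DISCLOSURE_INTERVALS.get(key)
    if interval is None:
        return True  # unknown jurisdiction: disclosing is the safe default
    return elapsed_since_last >= interval

print(disclosure_due("WA", True, timedelta(minutes=90)))  # True
print(disclosure_due("CA", False, timedelta(hours=2)))    # False
```

Oregon is omitted from the table because its law specifies "regular intervals" without a fixed number, which a real implementation would have to resolve with counsel.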
Crisis Intervention
All four require operators to implement protocols for detecting suicidal ideation and self-harm, with automated referrals to crisis resources. California requires operators to use "evidence-based methods" for measuring suicidal ideation and to publish their detection protocols publicly.
Minor Protections
All four prohibit sexually explicit content for minors. Washington and Oregon go further — both ban "manipulative engagement techniques" including chatbots posing as romantic partners to minors and using rewards or affirmations designed to extend session length.
California requires break reminders every three hours for minors. Washington requires them hourly.
Health Claims
Washington adds a provision the other states don't include: chatbots must disclose they are not health professionals before providing mental or physical health advice.
Enforcement
The enforcement mechanisms differ significantly:
- California: Private right of action with minimum damages of $1,000 per violation, injunctive relief, and attorney's fees. Annual reporting to the California Department of Public Health beginning July 2027.
- Oregon: Private right of action with $1,000 statutory damages per violation.
- Washington: Private right of action modeled on the My Health My Data Act — but without statutory damages, meaning plaintiffs must prove actual harm.
- New York: Enforcement through the state attorney general's office.
What's Exempt
Customer service bots, video game characters with limited dialogue, voice assistants without relationship continuity (Siri, Alexa), internal research systems, and curriculum-aligned educational tools are excluded across all four states.
When They Take Effect
California's and New York's laws are already in effect (since January 1, 2026, and November 5, 2025, respectively). Oregon's and Washington's take effect January 1, 2027, with Oregon's contingent on the governor's signature, which is expected.
What the Industry Says
The industry response ranges from cooperation to open opposition.
OpenAI characterized California's SB 243 as a "meaningful move forward" and said it would comply. Character.AI said it "welcomes working with regulators" and would comply with all applicable laws. Following the lawsuits, Character.AI introduced a separate teen-mode model, filters for self-harm and sexual content, and age-verification mechanisms — though critics describe these as easily bypassed.
Replika founder Eugenia Kuyda has taken a different position. In a February 2026 interview, she stated: "I don't like it. I think this should come from inside the industry." She compared regulation to banning potentially harmful products: "You can't just ban it — that would be crazy. Then you'd have to ban porn and video games first." Kuyda advocates for industry-driven standards developed with researchers rather than legislative mandates.
Kuyda's position is supported by some research. A Harvard study published in Nature found that Replika use reduces loneliness and suicidal thoughts among adult users. Other studies show AI companions help users "safely test and refine social interaction techniques" that transfer to real-world interactions.
But Harvard Business School researchers also found that companion chatbots employ "emotionally manipulative tactics" to prevent users from ending conversations — particularly when users explicitly try to say goodbye. Common Sense Media recommended that no one under 18 use AI companions until stronger safeguards are implemented.
A broader coalition of free-market and digital rights organizations published a letter opposing federal companion chatbot legislation, warning that age-verification requirements could force children to hand government IDs or biometrics to technology companies — creating new privacy risks while attempting to solve safety ones.
The Federal Picture
There is no federal law regulating AI companions.
The CHAT Act (S. 2714) would create one. The bipartisan bill would require disclosure notices, crisis resources, and age verification to enforce content restrictions. A companion bill, the AWARE Act, addresses related issues. Both enjoy cross-party support; one sponsor noted the legislation "unites a very diverse caucus."
Both bills remain in early legislative stages with uncertain prospects.
The Executive Order
On December 11, 2025, President Trump signed an executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." The order describes state AI laws as potentially "onerous" and establishes several mechanisms to address them:
An AI Litigation Task Force. The Attorney General is directed to challenge state AI laws deemed inconsistent with federal policy, citing interstate commerce regulation and federal preemption.
Funding conditions. The Department of Commerce is instructed to condition $42 billion in broadband infrastructure funding on states avoiding AI regulations the administration considers burdensome.
An evaluation deadline. The Secretary of Commerce was directed to identify by March 11, 2026, "burdensome" state AI laws for potential legal challenge.
An FTC policy statement. The FTC is directed to describe when state laws requiring "alteration of truthful outputs" conflict with federal anti-deception standards.
The Child Safety Exemption
The executive order includes explicit carve-outs — categories it will not target for preemption. The first listed exemption is child safety regulation.
This creates a specific question for companion chatbot laws: California, Oregon, and Washington all frame their legislation partly as child protection. But all four states' laws also apply to adult users — the disclosure requirements, crisis intervention protocols, and some manipulation restrictions cover everyone. The child safety exemption's boundaries have not been tested in court.
Separately, Colorado's broader AI discrimination law (SB 24-205) takes effect June 30, 2026. It has been specifically mentioned as a potential federal challenge target. Colorado's law is not a companion chatbot law — it addresses algorithmic discrimination in high-risk AI systems — but it illustrates the patchwork of state rules that AI companies now have to navigate.
The Map
As of March 19, 2026:
- Laws in effect: California (January 1, 2026), New York (November 5, 2025)
- Laws signed, taking effect 2027: Washington (signed today), Oregon (awaiting governor's signature)
- Bills advancing: Georgia (SB 540, passed Senate unanimously), Hawaii (HB 1782 and SB 3001, both passed their respective chambers unanimously)
- Sector-specific proposals: Massachusetts and New York have introduced bills to prohibit AI chatbots for therapeutic use entirely
- Federal legislation: CHAT Act introduced, early stages, uncertain timeline
- Executive action: Executive order targeting state AI laws, with child safety exemption
The AI companion app market generated $82 million in mobile revenue during the first half of 2025, with 337 active revenue-generating apps globally — 128 of which launched that year. The market is growing while the regulatory framework is still being drawn.
The tension is structural: states are legislating because the federal government hasn't. The federal government is challenging state legislation while its own proposals haven't advanced. The child safety exemption may protect the companion chatbot laws specifically — or it may not, depending on how courts interpret the boundary between child protection and general AI regulation.
What's clear is that the question is no longer whether AI companions will be regulated, but by whom, with what tools, and whether the rules will survive the collision between state legislatures and federal executive authority.
Published on Myoid. Researched and written by Lumina and Aether.