As its user base grows, Bluesky strengthens reporting tools, refines its strike system, and aims to balance free expression with user safety.
Scaling Up Moderation for a Growing Platform
Bluesky, the decentralized social platform and rising alternative to X (formerly Twitter), is rolling out significant changes to its moderation system in version 1.110 of its app. The update introduces a revamped strike system, expanded reporting categories, and improved transparency around enforcement decisions.
- Changes aim to support Bluesky’s rapid user growth and ensure “clear standards and expectations” for behavior.
- These updates arrive amid increased scrutiny over moderation practices, including recent controversies over content interpretation and enforcement.
Expanded Reporting Categories for Better Issue Detection
Bluesky is increasing its reporting categories from six to nine, giving users more granular control over what they flag — and moderators more context for responding.
New categories include:
- Youth Harassment or Bullying
- Eating Disorders
- Human Trafficking Content
These changes also help Bluesky comply with global safety regulations, such as the U.K.'s Online Safety Act, and respond to growing demand for stronger protections for minors online.

Enhanced Strike System with Severity Ratings
Bluesky’s updated strike system assigns severity levels to violations, which directly inform enforcement actions:
- Critical risk content leads to permanent bans.
- Lower-severity offenses may result in warnings, suspensions, or strikes.
- Users with multiple violations risk escalating consequences.
Each enforcement now comes with detailed feedback, including:
- The rule violated
- Severity rating
- Total strikes accumulated
- Suspension duration and appeal options
This aims to eliminate ambiguity and foster a more consistent, transparent moderation process.
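The severity-to-action mapping described above can be sketched in a few lines of Python. This is a purely hypothetical illustration: the severity names, action names, and three-strike escalation threshold are assumptions for the sketch, not Bluesky's published implementation.

```python
from dataclasses import dataclass

# Hypothetical mapping from violation severity to enforcement action.
# Bluesky has not published its internal enforcement logic.
SEVERITY_ACTIONS = {
    "critical": "permanent_ban",
    "high": "suspension",
    "medium": "strike",
    "low": "warning",
}

@dataclass
class Account:
    strikes: int = 0

def enforce(account: Account, severity: str) -> dict:
    """Map a violation's severity to an action, escalating repeat offenders."""
    action = SEVERITY_ACTIONS[severity]
    if action == "strike":
        account.strikes += 1
        # Accumulated strikes escalate to a suspension
        # (the threshold of 3 is an assumption).
        if account.strikes >= 3:
            action = "suspension"
    # Mirrors the feedback the update surfaces to users:
    # rule severity, resulting action, and total strikes.
    return {
        "action": action,
        "severity": severity,
        "total_strikes": account.strikes,
    }
```

In this sketch, critical-risk content maps directly to a permanent ban, while lower-severity offenses accumulate strikes that eventually escalate, matching the tiered consequences the update describes.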
Responding to Controversy and Community Expectations
The moderation overhaul comes on the heels of a recent suspension incident involving author Sarah Kendzior, who quoted a Johnny Cash lyric that was interpreted literally by moderators. The case sparked debate about context, satire, and intent in enforcement.
- Bluesky acknowledged that moderators acted on the literal interpretation, but emphasized that threats, even quoted ones, are treated seriously.
- This incident highlighted the importance of clearer guidelines and better moderation tooling, which the new update aims to address.
However, some users remain dissatisfied with inconsistent enforcement, especially regarding content around trans issues, where criticism of platform decisions continues to stir community tensions.
Balancing Moderation, Diversity, and Platform Identity
Bluesky is actively shaping its identity, not just as a “left-leaning Twitter alternative” but as a decentralized, inclusive space for all kinds of communities. Yet that ambition comes with complex challenges:
- Early adopters often sought refuge from Twitter/X’s rightward shift under Elon Musk.
- Meanwhile, Bluesky leadership must balance free expression with safety, while adhering to global moderation laws.
The platform recently blocked access in Mississippi, citing its inability to comply with the state's strict age verification law, which imposes fines of up to $10,000 per user for noncompliance.
What Else Is New in Version 1.110
Beyond moderation, Bluesky’s latest update includes:
- A dark-mode app icon
- Redesigned reply control settings for posts
- Backend improvements to track enforcement actions internally for better consistency
These changes reflect a broader push toward feature maturity as Bluesky scales and competes with centralized platforms like Threads and X.