Campaigning for a safe, just, and feminist Internet: Platform Accountability!

We, a group of civil society organisations from the Global South, have launched a platform accountability campaign this year to hold Meta accountable for the social, economic, and democratic harms it has caused and facilitated in the Global South, especially among structurally excluded groups.

We are calling on Meta to account for these harms, act responsibly towards all users and impacted communities, and respect our agency and rights.
Sign up!
Redesigning Social Media

A striking consensus emerged: while social media offers vital tools for connection, community-building, and professional growth, its current form is overwhelmingly associated with negative experiences.

Dominant themes included severe mental health impacts (anxiety, addiction, exhaustion), significant safety concerns (harassment, trolling, stalking, doxxing), and a profound loss of user agency to opaque, profit-driven algorithms. Participants collaboratively designed solutions centred on a fundamental power shift back to the user.
read report

Charter of Demands

We are civil society organizations based in.... We demand that Meta:
1
STOP MONETIZING HATE
Subject advertisements to the same rigorous standards as organic content: Review all ads for false, misleading, or hateful content before approval.

End surveillance-based advertising and micro-targeting: Stop collecting data that infers our political beliefs, and emotional or economic vulnerabilities.

Ban shadow advertisers: Prohibit advertisers who obscure their funding and connections to powerful state and non-state actors.
2
STOP FUELING GENOCIDE
Implement equitable crisis protocols during conflicts: These must effectively tackle surges in hateful and inciting content during political conflicts.

End uneven or one-sided content moderation: Content moderation practices must be even-handed and consistent across different languages and regions during a conflict.

Stop suppressing documentation of war and genocide: Provide clear, timely methods and responses to appeal restrictions and removals of such content.
3
END GENDER-BASED VIOLENCE
Immediately reverse the January 2025 changes to the Hateful Conduct Community Standard: Restore essential protections for women, trans, and queer persons.

Introduce more automated safety features: Design them by asking survivors what they need and consulting with relevant civil society groups.

Standardize reporting protocols across all Meta platforms: Provide not just automation, but also human intervention. Make these safety features visible, not hard to find.
4
MAKE ALL META PLATFORMS FULLY ACCESSIBLE
Fully comply with the Web Content Accessibility Guidelines (WCAG): Ensure this is done across all platforms, devices, and geographies.

Enable and encourage the use of assistive features: Do this across all media formats. Introduce alerts and nudges to encourage user adoption.

Make critical accessibility features automatic by default: Ensure that essential accessibility tools are applied automatically, rather than manually, across platforms.
5
PROVIDE FULL AND CLEAR INFORMATION
Translate all platform policies and community standards into every supported language: Ensure these consider varied literacy and accessibility needs.

Provide clear notifications about policy changes with easy opt-outs: Ensure these are in multiple formats, not just text, to reach all users.

Eliminate all dark patterns from platform interfaces: Remove deceptive design elements that manipulate users into sharing data or accepting changes.
6
RETURN DATA CONTROL TO USERS
Increase user control over personal data: We are the rightful owners of our personal data and must control its collection and use.

Increase user control over algorithmic content ranking: Provide the option to opt out of engagement-based ranking systems that prioritize polarization and extremism.

Maintain end-to-end encryption and strict privacy safeguards: Include options for users to restrict sharing, forwarding, screenshots, and screen recording, including on messages.
7
IMPROVE OCCUPATIONAL SAFETY FOR DATA WORKERS
Prioritize worker health over speed and volume: Redesign workflows, breaks and productivity goals for content moderators and data workers; remove penalties from the system.

Provide direct employment to content moderators: Ensure fair wages, guaranteed benefits, clear contracts and consistent quality mental health care without penalties.

End non-disclosure agreements that silence workers: Limit NDAs to protecting proprietary information only.
8
COMPLY WITH TRANSPARENCY PRINCIPLES AND STANDARDS
Comply with the Santa Clara Principles on content moderation: Provide complete data so that content moderation decisions can be assessed. Restore open access to CrowdTangle.

Reveal data on content suppression and takedowns: Provide decision-making parameters, data and reasons around legal and voluntary content restriction and takedown requests.

Release complete, unredacted human rights impact assessments for all regions: End selective disclosure; provide full assessments for every country.
9
GIVE USERS CONTROL OVER AI FEATURES
Give users the ability to opt out of all AI features: This includes AI assistants, chatbots, search functions, and summaries, as well as the use of their data for training AI.

Clearly disclose when users are interacting with AI, not humans: Provide prominent labels and frequent notifications, not seamless integration that deceives users.

Test AI features for safety and ethical risks before deployment: Publicly disclose results before any rollout.
read more

about us

We are a coalition of civil society organizations across the Global South on a mission to hold Big Tech giants like Meta accountable for the multiple social, economic, and democratic harms that we face on a daily basis. These harms are not accidental. They are systemic: rooted in decisions, design choices, policies, and practices that deprioritize our lived realities, denying us agency, dignity, privacy, respect, and safety on Meta's platforms.

We’ve had ENOUGH!
read more

Social Media Feed

Want to hold social media accountable on the regular?
View our campaigns and interact with us on our social media channels.
Sign up!