The Internet enables the free flow of information on an unprecedented scale, but the management of individuals’ fundamental rights, such as privacy and the mediation of free expression, is increasingly being left in the hands of private actors. The popularity of a few web platforms across the globe confers on their providers both great power and heavy responsibilities. Free-to-use web platforms are founded on the sale of user data, and their standard terms give providers rights to intrude on every aspect of a user’s online life, while offering users the Hobson’s choice of agreeing to those terms or not using the platform at all (the illusion of consent). Meanwhile, the same companies are steadily assuming responsibility for monitoring and censoring harmful content, either as a self-regulatory response to prevent conflicts with national regulatory environments, or to address inaction by states, which bear the primary duty for upholding human rights. There is an underlying tension for those companies between self-regulation, on the one hand, and being held accountable by states for rights violations, on the other. The incongruity of this position might explain the secrecy surrounding the human systems that companies have developed to monitor content (the illusion of automation). Psychological experiments and the opaque algorithms that determine which search results or friends’ updates users see highlight the power of today’s providers over their publics (the illusion of neutrality). Solutions could include the provision of paid alternatives, more sophisticated definition and handling of different types of data (public, private, ephemeral, lasting), and the cooperation of all stakeholders in arriving at realistic and robust processes for content moderation that comply with the rule of law.