China’s draft rules on recommendation algorithms address pressing issues but carry a taste of authoritarianism
China has made aggressive moves in its tech sector over the past few months, from reining in high-profile IPOs to limiting gaming hours for children. A number of legislative instruments are in the process of being adopted, including the Personal Information Protection Law, cybersecurity legislation, and the draft Internet Information Service Algorithmic Recommendation Management Provisions.
Providing user autonomy
The management provisions, issued by the Cyberspace Administration of China, are perhaps the most interesting and important intervention among the new set of legislative instruments. The provisions set out procedures and mandates for regulating the recommendation algorithms that are ubiquitous across e-commerce platforms, social media feeds and gig work platforms. They seek to address concerns of individuals and society, such as user autonomy, economic harm, discrimination and the spread of false information.
Algorithmically curated feeds dominate most of our interactions on the Internet. For example, this article may have reached you on a social media platform like Twitter thanks to a recommendation algorithm. Such an algorithm helps users navigate information overload by presenting the content it deems most relevant to them. These algorithms learn from user demographics, behavior patterns, location, the interests of other users who access similar content, and so on, to decide what to deliver. This limits the autonomy of the user, who has little say in which content is presented. Algorithms also carry inherent biases learned from their modeling or the data they encounter, which often leads to discriminatory outcomes for users.
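To make the mechanism concrete, here is a minimal, hypothetical sketch in Python (the names, tags and weights are invented for illustration, not any platform's actual code) of how a feed might rank items by matching a user's inferred interest tags against item tags:

def score(interest_weights, item_tags):
    # Sum the user's inferred weight for every tag the item carries;
    # items the model believes the user likes rise to the top of the feed.
    return sum(interest_weights.get(tag, 0.0) for tag in item_tags)

def recommend(interest_weights, items, k=3):
    # The user never picks this ordering directly; the inferred profile does.
    return sorted(items, key=lambda item: score(interest_weights, item["tags"]), reverse=True)[:k]

profile = {"cricket": 0.9, "tech-policy": 0.4}  # inferred from clicks, watch time, location, etc.
feed = recommend(profile, [
    {"title": "Match report", "tags": ["cricket"]},
    {"title": "Algorithm rules explained", "tags": ["tech-policy", "china"]},
    {"title": "Recipe of the day", "tags": ["cooking"]},
])
print([item["title"] for item in feed])

Even in this toy version, the ordering is driven entirely by the inferred profile rather than any explicit user choice, which is precisely the autonomy concern the draft targets.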
China is aiming to force recommendation algorithm providers to share control with users. The draft states that users should be allowed to audit and change the user tags employed by algorithms to filter the content presented to them. Through this, the draft lets users remove classifications they find objectionable and choose what is presented to them. This also has implications for gig work, where a worker can understand the basis on which gigs are presented to them. Additionally, Article 17 of the draft specifically addresses labor protections at the algorithmic level, requiring compliance with norms on working hours, minimum wages and other labor laws.
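As a purely illustrative continuation of the sketch above (the function names are hypothetical, not an interface prescribed by the draft), tag auditing of the kind the draft envisages could amount to exposing the inferred tags and honoring a user's deletions before any ranking happens:

def audit_tags(profile):
    # Show the user the tags the system has inferred about them.
    return dict(profile)

def delete_tag(profile, tag):
    # Drop a tag the user finds objectionable; later calls to recommend()
    # will no longer boost items carrying that tag.
    profile.pop(tag, None)

print(audit_tags(profile))      # user reviews what the algorithm thinks of them
delete_tag(profile, "cricket")  # user removes a classification they object to
print(audit_tags(profile))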
The draft places a clear emphasis on proactive intervention by recommendation algorithm providers to limit and prevent information disorder. This indicates how China is trying to crack down on fake, false and malicious information. It should be read alongside the draft’s clear overtones, which require recommendation algorithm providers to “maintain mainstream value orientation”, “vigorously disseminate positive energy” and “advance the use of algorithms in the direction of good”. Clearly, this is China’s attempt to quell any resentment towards the party and to keep tighter control over the social narrative.
Lessons for the present
Regulating algorithms is inevitable and necessary. The world is lagging behind in such initiatives, and China looks set to lead the pack. The draft addresses important issues and establishes certain ideals that should be adopted globally. The regulatory mechanism institutionalizes algorithmic audit and supervision, possibly for the first time anywhere in the world. However, a distinctly Chinese flavor of authoritarianism looms large in the draft rules. China has a less than desirable record on freedoms and is not the ideal candidate for setting standards through law. It would be best for liberal democracies to stay away from these proposals and to stick to technologically sound regulation that is free of the ills of censorship and social control.
It is high time for India to invest in and accelerate legislative action on data regulation, and to initiate conversations about the regulation of algorithms. India should try to achieve this without emulating China, where this draft only complements several other laws. India must act swiftly to address the legal and social harms of algorithmic decision-making. Policymakers must ensure that freedoms, rights and social protections, not rhetoric, inform policy changes.
Algorithms are as fundamental to the modern economy as engines were to the industrial economy. One-size-fits-all algorithmic regulation fails to take into account the dynamic nature of markets. An ideal system would rest on goal-based legislation that sets regulatory criteria for algorithms. The purpose of such legislation should be to set the normative standards that algorithmic decision-making must adhere to. This should be complemented by sectoral regulation that accounts for the complexities of individual markets.
Sapna GK is a Research Analyst at Takshashila Institute, Bangalore.