Can Ethics Help Restore Internet Freedom and Safety?

Internet services are suffering from various maladies ranging from algorithmic bias to misinformation and online propaganda. Could computer ethics be a remedy? Mozilla's head Mitchell Baker warns that computer science education without ethics will lead the next generation of technologists to inherit the ethical blind spots of those currently in charge. A number of tech industry leaders have lent their support to Mozilla's Responsible Computer Science Challenge, an initiative to integrate ethics into undergraduate computer science training. There is also growing interest in the concept of "ethical by design": the idea of baking ethical principles and human values into the software development process, from design to deployment.

Ethical education and awareness are important, and a number of useful resources already exist. Most computer science practitioners refer to the codes of ethics and conduct provided by the field's professional bodies, such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers, and in the UK the British Computer Society and the Institution of Engineering and Technology. Computer science research is predominantly guided by the principles laid out in the Menlo Report.

But aspirations and reality often diverge, and ethical codes do not translate directly into ethical practice. Or, to be precise, into the ethical practices of about five companies: the concentration of power among a small number of big companies means that their practices define the online experience of the majority of Internet users. I showed this amplified power in my study of the Web's differential treatment of users of the Tor anonymity network.

Ethical codes alone are not enough; they need to be complemented by suitable enforcement and reinforcement. So who will do the job? Currently, for the most part, companies themselves are the judge and jury of how their practices are regulated. This is not a great idea. The obvious misalignment of incentives is aptly captured in an Urdu proverb that means: "The horse and grass can never be friends". Self-regulation by companies can result in inconsistent and potentially biased enforcement, and/or over-regulation to stay legally safe.

Alternatives are unclear and beyond my expertise, but could look like an international multi-stakeholder model of Internet governance – a United Nations for the Internet! What's clear to me is that we don't want to go down the route of a fragmented Internet shaped by regional regulations. It took decades for the Internet to evolve into a global medium of free expression, unprecedented in the history of information empires. Any entity responsible for Internet regulation should be compatible with its global dimension and address the associated jurisdictional challenges.

But regulatory governance is not the only challenge. The very notion of regulation in the context of the Internet involves multiple layers of complexity, especially where it concerns automated decision-making systems and user-generated content.

The first challenge relates to legal liability and the difficulty of attribution, given the diversity of online functions (content delivery networks, hosting service providers, social media, ad services, etc.) provided by different companies and their complex interdependencies. Fairness, accountability and transparency in decision-making systems is a hot area of research. The general view in the community is that algorithms that are not explainable (i.e., even experts cannot tell how they reach their judgements) should not be used in sensitive contexts.
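To make "explainable" concrete, here is a minimal sketch, with synthetic data and illustrative model choices, contrasting a model whose judgements can be read off its weights with one whose judgements cannot:

```python
# Illustrative sketch using scikit-learn and synthetic data; the features
# and labels are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # e.g., three applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "approve" label

linear = LogisticRegression().fit(X, y)
forest = RandomForestClassifier().fit(X, y)

# The linear model is explainable: each coefficient says how a feature
# pushes a decision, so an expert can reconstruct any judgement.
print("linear weights:", linear.coef_)

# The forest predicts about as well, but its judgement is spread over
# hundreds of trees; no compact set of weights explains a single decision.
print("forest accuracy:", forest.score(X, y))
```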

With regard to online content, the challenge is that most platforms also embed content in real time from third parties (e.g., online advertising, product reviews and user comments), which may in turn have used the services of other third parties, and so on. Should these platforms be held liable for the oversights of those third parties? What makes it even more complex, for auditing purposes, is that it might not be possible to reproduce the interactions that led to certain content appearing on a platform. A recent study highlights similar challenges in the legal analysis of a real case that found gender-based discrimination in employment-related advertisements.
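A toy simulation, standing in for no real ad system, illustrates why such interactions can be hard to reproduce: each page load runs a real-time auction whose outcome depends on ephemeral bids, so an auditor replaying the page later may see different content.

```python
import random

def serve_ad_slot(bidders):
    # Real bids fluctuate with budgets, targeting and time of day;
    # randomness stands in for that ephemeral state here.
    bids = {name: random.uniform(0.1, 1.0) for name in bidders}
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bidders = ["ad_network_a", "ad_network_b", "ad_network_c"]
for page_load in range(3):
    winner, price = serve_ad_slot(bidders)
    print(f"load {page_load}: slot filled by {winner} at {price:.2f}")

# Without a complete log of every bid, the exact sequence of events that
# placed a given ad in front of a given user cannot be replayed afterwards.
```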

The second challenge is that of technical feasibility, which I described in detail in my written evidence submitted to the UK Parliament Communications Committee's ongoing inquiry into Internet regulation. Online platforms have to moderate an enormous, constant stream of user-generated content. The task can be automated to a degree, but human judgement is still necessary to discern contextual nuances (e.g., when should nudity be perceived as obscenity?) and to identify a suitable level of intervention (ignore, display a warning to viewers, remove, report to the police, etc.). Some of these platforms have recruited human moderators, which can work for small to medium platforms, but human review alone simply cannot scale to the magnitude and velocity of data produced by the millions of users on large platforms.
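The division of labour described above, automating the clear cases and routing the ambiguous middle to people, can be sketched in a few lines. Everything here (the scoring function, the thresholds, the action names) is illustrative rather than any platform's actual policy:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str

def policy_violation_score(post: Post) -> float:
    """Stand-in for a trained classifier returning P(violation)."""
    blocklist = ("attack", "obscene")  # toy signal, not a real model
    hits = sum(word in post.text.lower() for word in blocklist)
    return min(1.0, 0.4 * hits)

def triage(post: Post) -> str:
    score = policy_violation_score(post)
    if score >= 0.8:
        return "remove"        # clear violation: act automatically
    if score >= 0.3:
        return "human_review"  # contextual nuance: needs a person
    return "ignore"            # clearly fine: no action

for p in [Post(1, "lovely sunset"), Post(2, "an obscene attack on...")]:
    print(p.post_id, triage(p))
```

The hard part in practice is that middle band: set it too wide and the human review queue outgrows any workforce; set it too narrow and contextual judgements end up being made by a machine.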

Another important aspect is the nature of the communication that needs to be regulated. There is the public sphere (e.g., your Twitter feed or Facebook timeline) and the private sphere (e.g., the messages you exchange with your friends on WhatsApp). The latter is considered confidential data and is usually secured via end-to-end encryption. Preemptive regulation of private messages would therefore come at the cost of user privacy. This trade-off is the main reason why countering the rise of fake news on WhatsApp is so hard.
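Here is a minimal sketch of why end-to-end encryption blocks server-side content scanning, using the third-party cryptography package and assuming, for simplicity, that the two endpoints already share a key (real messengers negotiate keys per conversation):

```python
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # known only to sender and recipient
channel = Fernet(shared_key)

# What the sender actually transmits over the platform:
ciphertext = channel.encrypt(b"meet at 6pm")

# The relaying platform sees only this opaque blob, so it cannot inspect,
# filter or fact-check the content in transit.
print(ciphertext)

# Only the recipient, who holds the shared key, recovers the plaintext.
print(channel.decrypt(ciphertext))
```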

The final challenge, and indeed the hardest, relates to moral judgement. It is relatively easy to flag content that is clearly unacceptable, for example terrorist content and child pornography. But there is a large ethical grey area where opinions are more subjective and vary according to moral, religious, cultural, social, political and ideological contexts. One person's free speech is another's religious hate speech. So who will decide whether or not to remove or otherwise censor some online content, and what code of ethics will they use? There is also a need for independent adjudication: who decides when there is a dispute over what should be taken down? There is a bigger societal and philosophical question here, and there are no easy answers.

Ethics are important. They give us a moral framework to operate in and represent a body of shared aspirations and expectations. But ethics are far from a panacea for the Internet's ills, which involve a complex set of questions that cut across multiple disciplines. The big challenge for us is to find ways to uphold freedom of speech while also supporting the digital economy and users' online safety and privacy.
