Prodigy and CompuServe were the online service providers at the center of the 1990s legal battles that laid the foundation for today's internet businesses. With their slow dial-up connections, these companies sound like relics of a distant past. But, back then, the action on their bulletin boards was anything but slow or calm. Claims of defamation, libel, and unfair competition flooded the courts. Eventually, a legal precedent was set that exempted service providers from responsibility for user-generated content. A storm was brewing on the horizon.
Technology and the marketplace have changed since the 1990s, and so must our expectations about the role and obligations of online platforms. Today the path for disseminating information is wide open and turbocharged by algorithmic recommendations. Social media and technology play a role in spreading misinformation, while platforms use personal data and algorithms to target users. Just turn on the news and watch the storm clouds gather: from debt ceiling negotiations in the United States to the threat to democratic institutions in Brazil.
This year, cases involving Google and Twitter have made it to the Supreme Court of the United States. The arguments center on Section 230 of the Communications Decency Act, the 1990s law that essentially immunizes platforms like Google, Facebook, TikTok, and Twitter from liability for the user-generated content posted on them. Few imagined what algorithms could do when Congress passed Section 230. Operating under a shield of immunity, platforms became vehicles for disinformation that radicalizes users. Is this what we want for the future?
I spent more than 30 years in the technology industry and am currently a Fellow at the Advanced Leadership Initiative at Harvard University, focusing on the intersection of technology and democracy. Examining the technological, economic, social, regulatory, and human rights dimensions of the problem opened my eyes to the need for a holistic and systemic approach.
Many of us, driven by our professional backgrounds, have a bias toward using tech to fight tech. If you are going to engage with robots and AI, you'd better bring your tools to the party. The same AI that generates deepfakes can be trained to identify anomalies and seek the truth. Digital watermarking can help us trace the provenance of information: where it was generated and by whom. Improved visibility into how algorithms make decisions can help us eliminate bias, fight racism, and resist attacks on our democracy.
Certainly, there's some truth to this: better tech is a big part of how we address the challenge. Think about it: if a robot will review our credit applications, analyze our radiology images, or draft a movie script, we want the data and the algorithms that govern its actions to be free from bias. If AI will curate the news we read, we want that curation to be grounded in facts.
But tech alone will not solve this problem. We are living in a new phase of the “Information Revolution,” in which innovation is altering the way we work, the nature of work itself, the way we communicate, and how we relate to one another. This requires a new approach to rights, a new legal and regulatory framework, and new enforcement mechanisms. Dozens of countries are already working on AI and data governance, including aspects of content moderation.
The time to act is now, in three areas.
First, those who develop technology should adopt principles of ethical development, respecting privacy and scanning for bias in data and algorithms. Users of technology in industries from financial services to retail and consumer goods are starting to take a leadership role in championing AI and data ethics. However, there is still a gap between intention and action, and more diverse representation is needed on technical and decision-making teams.
Second, after hearing the oral arguments in Gonzalez v. Google and Twitter v. Taamneh, the Supreme Court should not interfere with Section 230. From an economic perspective, the Big Tech business models and the data and AI startup ecosystem assume that platforms are liable neither for user-generated content nor for taking down harmful third-party content. Upsetting that assumption would be extremely disruptive. Moreover, from a civil rights perspective, we must preserve free speech. This is especially important for minorities, who are typically the first to be subjected to censorship.
Third, by not acting on Section 230, the court directs action back to Congress and society, where it belongs. The issues discussed above should be addressed through legislation that creates a new regulatory framework, one that balances platform liability protection, the amount platforms invest in content moderation, and the value our society places on freedom of speech. Liability protection should be limited, extending the current carve-outs for federal crimes, sex trafficking, and intellectual property violations. The new framework should include provisions covering civil rights violations, targeted harassment, incitement to violence, hate speech, and disinformation. This is also an opportune moment to create a new government body dedicated to this area, one that acts as the enforcement arm protecting consumers and competition.
We need to control the tech that surrounds us. Otherwise, it will either continue to be manipulated by bad actors or become a bad actor itself in the hands of a dystopian artificial master that creates an alternative reality. We can avoid the storm brewing on the horizon. The race is on.
This article represents my personal views. It does not represent the views of any companies I have been or am presently affiliated with.