Today, Congress approved the TAKE IT DOWN Act, which prohibits the online publication of nonconsensual intimate images, including AI-generated deepfake forgeries. The law requires platforms to remove such content within 48 hours of a valid request and imposes criminal penalties, including prison terms, along with restitution, forfeiture, and FTC enforcement for noncompliance. Sponsored by Senator Ted Cruz and backed by First Lady Melania Trump, the legislation passed the Senate with unanimous bipartisan support and was subsequently approved by the House on a rare 409-2 vote. It now awaits President Trump's signature. It is the first piece of AI-focused regulatory legislation passed by Congress in several years.
Almost thirty years ago, the Communications Decency Act became law, and its only surviving provision made history: Section 230, the twenty-six words that created the internet. It established a liability shield for internet platforms and enabled the user-generated content economy and Web 2.0. Two years ago, in Gonzalez v. Google, the Supreme Court declined to review the merits of Section 230, pointing instead to a companion case, Twitter v. Taamneh, in which it ruled that a social media company like Twitter cannot be held liable for aiding and abetting (in this case, terrorism) simply by providing a platform for content, even if the platform displays or recommends that content. The ruling was a solid victory for defenders of free speech, including the ACLU, but it left open the question of how far is too far when it comes to platform liability.
It is in this regulatory vacuum that my team and I proposed the Dynamic Governance Model, a policy-agnostic, extra-regulatory mechanism designed to build incrementally and adapt as the technology evolves. Based on our research, we predicted that deepfakes would be an early target for action by Congress in its current session, despite lingering concerns from industry. The TAKE IT DOWN Act creates a statutory obligation for platforms to act diligently upon receiving notice of nonconsensual content, paralleling the legal concept of a "duty of care." This is a material departure from the largely laissez-faire doctrine the U.S. has adopted toward the tech industry until now.
It looks like Congress has finally awakened after a decades-long slumber.