This article in The Hill, which I co-authored, helps explain why the next Congress will have to make three decisions about AI:

What values will guide our policy: national security, market efficiency, power concentration, AI safety, industrial policy, or the public interest? Only elected officials can legitimately represent the American people in balancing these competing values.
Should regulation focus on the technology's long-term existential risks, or should we prioritize existing harms and biases?
Do we need a new AI or Digital Technologies Agency, or should regulation and its enforcement be delegated to existing agencies?