Governments are introducing new laws and regulations aimed at containing the risks posed by generative AI. They won’t work, because they won’t be able to overcome three obstacles. A better approach is to regulate the processes used to develop generative AI and to embed laws within software systems.
Efforts to regulate artificial intelligence have ramped up in recent weeks. Just yesterday, the Biden administration announced a sweeping new executive order that aims to reshape the federal government’s approach to AI. That order relies on a Korean War-era law to compel companies developing certain high-impact generative AI models to notify the government and to share their testing results, among other provisions. Across the Atlantic, the UK kicks off its AI Safety Summit this week, while the EU is finalizing its AI Act as it seeks to become the global leader in regulating AI. Increasingly, the focus of these new proposals is to rein in the dangers of generative AI.