How do you know when AI is powerful enough to be dangerous? Regulators try to do the math
By Binu Mathew
How do you know if an artificial intelligence system is so powerful that it poses a security danger and shouldn’t be unleashed without careful oversight? For regulators trying to put guardrails on AI, it’s mostly about the arithmetic. Specifically, an AI model trained using a total of 10 to the 26th floating-point operations (a 1 followed by 26 zeros) must now be reported to the US government and could soon trigger even stricter requirements in California. The threshold measures the total amount of computation used to train the model, not how fast the model runs.
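For a sense of how that threshold works in practice, here is a rough back-of-the-envelope sketch in Python. It relies on the commonly used approximation that training compute is about 6 times the number of model parameters times the number of training tokens; the parameter and token counts below are hypothetical, not figures from any regulator or company.

```python
# Rough sketch: would a hypothetical training run cross the 10^26-FLOP
# reporting threshold? Uses the common "6 * parameters * tokens" estimate
# of total training compute. All numbers below are illustrative only.

THRESHOLD_FLOPS = 1e26  # reporting threshold cited by US and California rules


def training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6 * num_parameters * num_tokens


# Hypothetical model: 500 billion parameters trained on 30 trillion tokens.
flops = training_flops(500e9, 30e12)  # about 9e25 FLOPs

print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Over reporting threshold" if flops >= THRESHOLD_FLOPS
      else "Under reporting threshold")
```

Under these assumed numbers the run lands just below the line, which illustrates why the rule targets only the very largest frontier-scale training efforts.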