I'm not sure what they intended this to apply to. LLM-based systems don't change their own operation (at least, not any more so than anything with a database).
We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
For LLMs we have "for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
For either option you can trace the intention of the definitions back to the question "was it a human coding the decision or not?" Did a human decide the branches of the literal or figurative "if"?
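Concretely, something like this (a made-up sketch, not language from the Act; the loan example, names, and threshold are all hypothetical):

```python
# 1) A human decided the branch: the rule is explicit, and a person can be
#    held to account for choosing it.
def approve_loan_rule(income: float, debt: float) -> bool:
    # Someone picked this threshold and wrote this "if".
    return debt / income < 0.4

# 2) The system infers the output from its input: no human wrote the branch,
#    and the "objective" lives in the model's weights, not in reviewable code.
def approve_loan_model(applicant_text: str, model) -> bool:
    # `model` is a stand-in for any learned classifier or LLM call; the
    # mapping from input to decision was never spelled out by a person.
    return model.predict(applicant_text) == "approve"
```

The first function is ordinary software; the second is the kind of thing the "infers, from the input it receives, how to generate outputs" wording seems aimed at.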
The distinction is accountability: whether a human decided the outcome, or it was decided by an obscure black box where data is algebraically twisted and turned in a way no human can fully predict today.
Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and with it the legal repercussions for the AI's output.
> We'll probably have to wait until they fine someone a zillion dollars to figure out what they actually meant.
In reality, we will wait until someone violates the obvious spirit of this so egregiously, ignores multiple warnings to that end, and winds up in court (à la the GDPR suits).
This seems pretty clear.