> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
TechCrunch simplified it.
From my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
Edit: here's the actual text from the act, which makes it clearer that it's about whether the deception is purposefully intended for malicious reasons
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
> In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled practices.
Broadly speaking, I feel that whenever you have to build specific carve-outs into a law for perfectly innocuous behavior that would otherwise be illegal under the law as written, it's not a very well-thought-out law.
Either the behavior in question is actually bad, in which case there shouldn't be exceptions, or there's nothing inherently wrong with it, in which case you have misidentified the actual problem and are probably needlessly criminalizing a huge swathe of normal behavior beyond just the one exception you happened to think of.
Funny, I took away pretty much the opposite: that advertising is only "acceptable" because it's been around for so long, but is otherwise equally ban-worthy for all the same (reasonable) reasons.