People are already getting worked up about being prompted to opt into a new feature on update (even if that prompt is hidden behind an icon that doesn't do anything until the user clicks it), so it's not inconceivable that the kill switch just disables those opt-in prompts for AI-related features.
I guess that just means there will be a number of AI-related features you can choose to enable, but if at some point you want it all gone, you just check "Disable all AI."
I think Facebook did a study showing that making options opt-in means only a tiny, tiny percentage of users will ever activate them. People never look around in settings.
I suppose a popup that shows up - after you click away the one that says "Thank you for loving Firefox"(1) - saying "Hey, hey, look at me, look, we have this new feature, it'll blow you away. Do you want to enable it?" would be obnoxious but would satisfy the idea of "opt-in".
If it's off by default it will stay off unless the user is somehow made to try it. Enabling it by default is one way to do that, the simplest one, but it's not the only one. The rest require explaining clearly what the user will get out of enabling it ... and that is often difficult to do succinctly, or convincingly. So shoving it down everyone's throat it is.
> making options opt-in means only a tiny tiny percentage of users will ever activate them
Why exactly should I, a user, care about this? I don't want useless crap shoved in my face, period. I don't care that people won't turn on someone's pet feature if it isn't enabled by default.
If the responses are sponsored, it seems the value drops dramatically.
I want the AI agent to act more like a fiduciary, an independent 3rd party acting in my best interest. I don't need an AI salesman interjecting itself into my life with compromised incentives.
We “AI-hostile users” are this way partially because we know that our desires do not align with those of the people funding these tools.
OpenAI was already taking steps to integrate ads, and Grok shows how much we should be trusting AI as some impartial 3rd party. The goal was always about control and profiting off of said control. Pretty much the antithesis of hacker mindsets.
Is there a reason such a thing couldn't present a bunch of neutral options, but with affiliate links that provide revenue back to Mozilla?
(I mean, that could still steer it toward places that have affiliate programs, but if you're running a local AI tool to help you search for these things that seems like something you should reasonably be able to toggle on and off/configure in a system prompt/something.)
What we’ve seen from other companies is exactly what you mention: unfair ranking and promotion of items with affiliate links or the highest payouts. Changing the incentives compromises the integrity of the results.