I’m sympathetic to the complaints about “rude” scraping behavior, but there’s an easy solution. Rather than make people consume boatloads of resources they don’t want (individual page views, images, scripts, etc.), just build good interoperability tools that give the people what they want. In the physical example above, that would be a product catalog that’s easily replicated with a CSV product listing or an API.
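As a rough illustration of the CSV route, here’s a minimal sketch using Flask; the endpoint path and the in-memory `PRODUCTS` list are placeholders standing in for whatever actually backs the catalog:

```python
# Minimal sketch: expose the same catalog scrapers would otherwise crawl
# page by page as a single CSV download. PRODUCTS stands in for the real
# backing store (database query, search index, etc.).
import csv
import io

from flask import Flask, Response

app = Flask(__name__)

PRODUCTS = [
    {"sku": "A-100", "name": "Widget", "price": "9.99", "stock": 42},
    {"sku": "B-200", "name": "Gadget", "price": "24.50", "stock": 7},
]

@app.route("/catalog.csv")
def catalog_csv():
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["sku", "name", "price", "stock"])
    writer.writeheader()
    writer.writerows(PRODUCTS)
    return Response(
        buf.getvalue(),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=catalog.csv"},
    )
```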
You don't know why any random scraper is scraping you, and thus you don't know what API to build that will dissuade them from scraping. Also, it's likely easier for them to continue scraping than to write a bunch of code to integrate with your API, so there's no incentive for them to do so either.
Just advertise the API in the headers. Or better yet, make the buttons/links only accessible via a .usetheapi-dammit selector. Lastly, provide an API and a “developers.whatever.com” domain to report issues with the API, get API keys, and pay for more requests. It should be pretty easy to set up, especially if there’s already an internal API behind the frontend. I’d venture a dev team could devote 20% of their time for a few sprints and have an MVP up and running.
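For the “advertise it in the headers” part, here’s one possible sketch (again Flask); the developers.example.com URLs and the specific Link rel values are illustrative, not a claim about any particular standard:

```python
# Minimal sketch: attach a Link header to every HTML response so a scraper
# that inspects responses can discover where the real data lives.
# The URLs and rel values below are placeholders for illustration only.
from flask import Flask

app = Flask(__name__)

@app.route("/products/<sku>")
def product_page(sku):
    # Ordinary HTML page the scraper would otherwise parse.
    return f"<html><body>Product {sku}</body></html>"

@app.after_request
def advertise_api(response):
    if response.mimetype == "text/html":
        response.headers["Link"] = (
            '<https://developers.example.com/docs>; rel="service-doc", '
            '<https://developers.example.com/api/v1>; rel="service"'
        )
    return response
```

The same pointer could just as well go in robots.txt or a meta tag; the header approach simply means anyone already looking at raw responses can’t miss it.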
For the 2nd part: I have done scraping and would always opt for an API, if the price is reasonable, over paying nosebleed amounts for residential proxies.
I think lots of website owners know exactly where the value in their content lies. Whether or not they want to share it in a convenient way, especially with competitors etc., is another story.
That said, if scraping is inevitable, it’s an immense and often avoidable waste of effort for both the scraper and the content owner.
In this case, yes, obviously. But as far as "it's likely easier for them to continue scraping than write a bunch of code to integrate with your API" goes, that presupposes no existing integration.