Yeah, though a salt would at least mean you'd have to rebuild the table for each site/database/whatever. However, I'm having a hard time seeing how to really protect against this.
The IP is an identifier, so unlike a password salt (where the user is the identifier) you need a way to know what the salt is before you can hash the IP, and it needs to be consistent.
You can keep a lookup table of IP-to-salt, but that table either gives away your list of addresses (if it only contains IPs you've seen) or is huge (the entire IPv4 range), and either way it doesn't prevent rainbow tables.
You can use a single static salt for the entire site, but again this doesn't help much against rainbow tables (beyond forcing the table to be recalculated, once).
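The static-salt variant, sketched with hypothetical names, shows why one rebuild of the table suffices:

```python
import hashlib

SITE_SALT = b"hypothetical-static-salt"  # same value for every record

def hash_ip(ip: str) -> str:
    # Output is consistent for a given IP, as an identifier requires,
    # but anyone who learns SITE_SALT can hash all ~4.3 billion IPv4
    # addresses once and then invert every record on the site.
    return hashlib.sha256(SITE_SALT + ip.encode()).hexdigest()
```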
You could encrypt instead of hash, and then enforce some policy (e.g. the decryption library/service/piece will only allow decrypting ciphertext newer than 30 days).
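A sketch of that policy, with a toy stream cipher standing in for a real AEAD such as AES-GCM (all names hypothetical): the encryption timestamp travels with the ciphertext, and the decryption side refuses anything older than 30 days.

```python
import hashlib
import os
import time

MAX_AGE_SECONDS = 30 * 24 * 3600   # the 30-day policy
KEY = os.urandom(32)               # held only by the decryption service

def _keystream(nonce: bytes, length: int) -> bytes:
    # Toy stream cipher for illustration (fine for IPv4-length strings);
    # a real deployment would use an authenticated cipher instead.
    return hashlib.sha256(KEY + nonce).digest()[:length]

def encrypt_ip(ip: str, now=None) -> bytes:
    ts = int(now if now is not None else time.time()).to_bytes(8, "big")
    nonce = os.urandom(16)
    data = ip.encode()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(nonce, len(data))))
    return ts + nonce + ct         # timestamp travels with the ciphertext

def decrypt_ip(token: bytes, now=None) -> str:
    ts = int.from_bytes(token[:8], "big")
    if (now if now is not None else time.time()) - ts > MAX_AGE_SECONDS:
        raise ValueError("ciphertext older than 30 days; policy forbids decryption")
    nonce, ct = token[8:24], token[24:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(nonce, len(ct)))).decode()
```

Note the timestamp here is unauthenticated; a real system would bind it to the ciphertext cryptographically so it can't be rewound.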
If you need the ability to group ciphertexts without decrypting them, you could create a scheme which will make cryptographers cringe, but could be justified in this specific case.
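One such cringe-inducing but arguably defensible scheme: store, next to each ciphertext, a deterministic keyed tag (an HMAC under a separate secret key). Equal IPs get equal tags, so you can group and count records without decrypting, at the cost of leaking which records share a plaintext. Names here are hypothetical.

```python
import hashlib
import hmac
import os

# Separate secret; in practice kept stable across restarts (e.g. in a KMS).
GROUP_KEY = os.urandom(32)

def group_tag(ip: str) -> str:
    # Deterministic: the same IP always yields the same tag, enabling
    # grouping without decryption. The trade-off is that equality of
    # plaintexts (and thus frequency counts) is visible to anyone
    # holding the tags.
    return hmac.new(GROUP_KEY, ip.encode(), hashlib.sha256).hexdigest()
```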
For large sites, there's also the risk that an attacker can say something statistically useful about the plaintext without recovering it. For instance, you can probably assert things like which IP blocks are likely to comprise most of the entries in the table, or which IP blocks or addresses cannot be in it.
Is there a mitigation I'm not thinking of?