If we are talking about the actual root servers, there are 13 redundant names spread out (thanks to anycast) across roughly 1,700 server instances around the world, and the lookup a user would do is cached for 2 days (the TTL on the TLD delegations in the root zone). That means the most traffic a resolver will generate towards them is one request per unique TLD (like .com) per 2 days, and the answer fits in a single UDP packet.
We can then make some guesses about the size of questions like "what are the nameservers for .com". Those answers are a bit larger than most DNS responses, since .com has a lot of nameservers, so let's put it at 800 bytes. Every 2 days an average user might then, with some guessing, generate maybe 10 kB of traffic, or about 0.015 seconds' worth of watching a 1080p video on YouTube.
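As a rough sanity check on those numbers (the ~12 distinct TLDs per user, the 800-byte answer, and the ~5 Mbit/s figure for 1080p video are all my own guesses, not measurements):

    ~12 distinct TLDs per 2 days x ~800 bytes per answer  ~= 10 kB
    1080p stream ~= 5 Mbit/s                               ~= 625 kB/s
    10 kB / 625 kB/s                                       ~= 0.016 s of video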
Everyone used to query the root servers directly from their ISP or corporate edge resolvers, until the big platforms wanted to gather more of everyone's data in the name of "keeping people safe" from "bad ISPs". As with any manipulation campaign, there are a few incidents corporate propagandists can cite to say, "See! We are protecting you!!", forcing people to debate the issue while knowing the majority will accept the default settings. Blocking all the DoH/DoT resolvers would be trivial for any ISP to do, just as I have been doing at home since the inception of DoH.
The root Anycast clusters are absolutely designed to handle the entire internet querying them which I do from Unbound. If one wishes to help reduce load they can enable large memory caches and rewrite min-ttl to something sane to protect the root servers from Amazon EC2's default 5 second ttl and others like them. Blocking known spam and tracking domains also helps reduce the total number of queries. Groups of friends can even further reduce the load by setting up their own DoH/DoT servers using Unbound DNS and sharing the cache and using cron to keep their favorite domains hot in the cache and increasing private by making the crond queries from a VPS node.
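For anyone curious what that looks like in practice, here is a minimal unbound.conf sketch along those lines; the cache sizes, the 300-second TTL floor, and the blocked domain are illustrative values I picked, not recommendations, and the cron line at the end (keeping a favorite name warm via a local dig) is equally a placeholder:

    server:
        # large in-memory caches so answers get reused instead of re-asked
        msg-cache-size: 256m
        rrset-cache-size: 512m
        # refresh popular entries before they expire
        prefetch: yes
        # floor absurdly low TTLs (e.g. 5-second records) at 5 minutes
        cache-min-ttl: 300
        # drop a known tracking domain locally, no upstream query at all
        local-zone: "tracker.example" always_nxdomain

    # example crontab entry on a VPS, re-warming a favorite name every hour:
    # 0 * * * * dig @127.0.0.1 example.com > /dev/null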
Some DNS recursive resolvers have longer-than-desired round-trip times to the closest DNS root server; those resolvers may have difficulty getting responses from the root servers, such as during a network attack. Some DNS recursive resolver operators want to prevent snooping by third parties of requests sent to DNS root servers. In both cases, resolvers can greatly decrease the round-trip time and prevent observation of requests by serving a copy of the full root zone on the same server, such as on a loopback address or in the resolver software. This document shows how to start and maintain such a copy of the root zone that does not cause problems for other users of the DNS, at the cost of adding some operational fragility for the operator.
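With Unbound, the usual way to do this is an auth-zone for the root that is transferred from servers offering the root zone and used only for the resolver's own upstream lookups. A minimal sketch, assuming a reasonably recent Unbound (newer releases also accept "primary:" in place of "master:"; which transfer sources you list is your choice):

    auth-zone:
        name: "."
        # transfer the root zone from servers that allow it
        master: b.root-servers.net
        master: lax.xfr.dns.icann.org
        # fall back to normal iterative root queries if the local copy is stale
        fallback-enabled: yes
        # use the copy only for this resolver, do not serve it to downstream clients
        for-downstream: no
        for-upstream: yes
        zonefile: "root.zone"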