As I said in another part of the thread, I was in a scenario where we performed millions of inserts into a table of four integers, one row at a time. Generating the SQL string and parsing it again wound up being enough to blow our 10µs time budget.
But with parameters you don't need to do that, either in that scenario or in any other where only (literal) values change.
On the other hand, if you were importing data and each line specified which table and column it had to go into you'd probably have to write a new SQL statement each time.
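To illustrate the parameterized approach, here's a minimal sketch using Python's sqlite3 module and a hypothetical four-integer table (the column names are made up to match the detector layout described below): the statement is prepared once and only the bound values change per row.

```python
import sqlite3

# In-memory database with a hypothetical four-integer events table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (x INTEGER, y INTEGER, energy INTEGER, ts INTEGER)"
)

# The SQL text is parsed once; each insert only binds new integer values,
# so there is no per-row string formatting or re-parsing.
insert_sql = "INSERT INTO events (x, y, energy, ts) VALUES (?, ?, ?, ?)"
rows = [(1, 2, 30, 1000), (4, 5, 60, 1001), (7, 8, 90, 1002)]

cur = conn.cursor()
cur.executemany(insert_sql, rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 3
```

If the table or column names themselves varied per row, as in the import case above, you'd be back to building a new statement each time, since identifiers can't be bound as parameters.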
It was a neutron detector that stored the position, energy, and time stamp of each event in the database. The design spec was 10,000 neutrons per second, so that was the origin of the time limit.
Of course, half those neutrons arrived within a 20ms window, so we had a buffer to handle the load. However, if the average rate stayed above the limit, the buffer would fill up. There was a brief discussion of just ignoring events when the buffer was full, but that could introduce systematic errors in the data that would be impossible to detect, so it was better for the database to just crash.
The solution was to tighten the slits on the neutron beam to lower the count rate to the point that we never filled the buffer. Granted, we were testing a high-flux technique, so that was a bit of a disappointment. Everything else in the instrument could handle an order of magnitude more neutrons, except this database.
By the way, to be fair to the database designers, they were working with Clinton-era tech and were probably told that the flux rate would never be a SUSTAINED 10,000 events per second.