Theoretically, UDP would be the best choice if you had the time & money to spend on building a very application-specific layer on top that replicates many of the semantics of TCP. I am not aware of any apps that require 100% of the TCP feature set, so there is always an opportunity to optimize.
You would essentially be saying: "I know TCP is great, but this one thing we really need to do our way justifies the cost of developing an in-house mostly-TCP clone and of living with the caveats of UDP."
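To make that cost concrete, here is a rough sketch (Python; the function names, fixed timeout, and retry count are all made up for illustration) of rebuilding just one TCP semantic, reliable in-order delivery, over UDP:

    import socket

    TIMEOUT = 0.5       # retransmit timer; tuning this adaptively is half of TCP
    MAX_RETRIES = 5

    def reliable_send(sock, addr, payload, seq):
        # Stop-and-wait: send one datagram, block until the matching ACK arrives.
        packet = bytes([seq]) + payload
        sock.settimeout(TIMEOUT)
        for _ in range(MAX_RETRIES):
            sock.sendto(packet, addr)
            try:
                ack, _ = sock.recvfrom(1)
                if ack and ack[0] == seq:
                    return            # peer confirmed receipt
            except socket.timeout:
                pass                  # lost data or lost ACK: retransmit
        raise ConnectionError("peer not acknowledging")

    def reliable_recv(sock, expected_seq):
        # Receive one datagram, ACK it, and drop duplicates caused by retransmits.
        while True:
            packet, addr = sock.recvfrom(2048)
            seq, payload = packet[0], packet[1:]
            sock.sendto(bytes([seq]), addr)   # ACK everything, even duplicates
            if seq == expected_seq:
                return payload

And that is before windowing, congestion control, flow control, path-MTU handling, or connection teardown; each of those is another project. That is the "time & money" part.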
If you know your communications channel is very reliable, UDP can be better than TCP.
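On a link you trust, say a direct cable between two machines on a LAN you control, the whole "transport layer" can be this (hypothetical address and payload):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"sensor-reading:42", ("10.0.0.7", 9999))  # one datagram, done

No handshake round-trip, no connection state, no head-of-line blocking; with TCP you pay for those guarantees even when the link never drops a packet.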
Now, I am absolutely not advocating that anyone go out and do this. If you are trying to bring a product like Dropbox to market (and you don't have their budget), the last thing you want to do is play games with low-level network abstractions across thousands of potential client device types. TCP is an excellent fit for this use case.
It's an ideal application of TCP. Dropbox servers are continually flooded with traffic from clients, so TCP's congestion control is valuable. There is also less need to implement error detection, correction, and retransmission in higher layers.
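As a sketch of why it fits (the host, port, and file name here are made up): the application just streams bytes, and the kernel's TCP stack handles ordering, retransmission, and backing off when the path or the server is saturated.

    import socket

    with socket.create_connection(("sync.example.com", 9000)) as conn:
        with open("changed-block.bin", "rb") as f:
            conn.sendfile(f)   # kernel retransmits losses and paces against congestion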
I am not saying that it should be done from scratch.
But most recent research on web protocols (QUIC being the prominent example) builds on top of UDP rather than TCP, largely for historical reasons: TCP's behavior is baked into kernels and middleboxes, so new transport ideas are far easier to deploy in user space over UDP.
In theory TCP would be the better choice, but in practice this is more complex than you assume.
I think that many people have a knee-jerk reaction when talking about TCP vs UDP, but they probably don't know as much as they think they do; a lot of it is parroted received wisdom.
The onus is on you to explain why. Reasons not to: smaller usable payloads per packet (once your own framing takes its cut), and missing out on all the TCP algorithms already implemented in hardware en route.
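For the payload point, some back-of-envelope numbers (the 16-byte custom header is an assumed figure for whatever seq/ack/checksum framing your UDP reliability layer would need):

    MTU = 1500                  # typical Ethernet
    IP, TCP, UDP = 20, 20, 8    # IPv4 and transport headers, no options
    CUSTOM = 16                 # assumed: your own framing on top of UDP

    print(MTU - IP - TCP)            # 1460 bytes of payload per TCP segment
    print(MTU - IP - UDP)            # 1472 bytes raw UDP, but...
    print(MTU - IP - UDP - CUSTOM)   # 1456 once your reliability header is added

Meanwhile, NIC offload engines (checksumming, segmentation) and middlebox optimizations along the path speak TCP, not your custom protocol.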