Exactly! Lower "random_page_cost" (the default is 4) and this becomes a non-issue. It's just one of the less well-known things and somewhat difficult to diagnose. What I mean is that what you'll actually see is your database's rows-returned metric rapidly climbing while rows fetched holds steady (or decreases). From that alone it's a bit of a leap (or at least it was for me at the time) to conclude that the query planner was not using an index on a table that was clearly indexed.
Once you understand what is going on, it makes perfect sense. It's just getting there that is the trick.
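For anyone who wants to poke at this on their own instance, here's a rough sketch (the table and column names are made up, and the right value depends on your storage):

    -- Check the current setting (the default is 4.0)
    SHOW random_page_cost;

    -- See whether the planner picks a sequential scan even though
    -- the column is indexed (orders/customer_id are hypothetical)
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- Lower the estimated cost of random I/O for this session,
    -- then re-run the EXPLAIN to see if it switches to the index
    SET random_page_cost = 1.1;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- Persist it for all sessions once you're happy with the result
    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();

On SSD-backed instances something between 1.1 and 2 is a common starting point, since random reads there are nearly as cheap as sequential ones.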
Out of curiosity, do these optimizations/calculations find their way into scripts like postgresqltuner [1], or do most DBAs here just apply their own learned optimizations manually?
This is the type of thing that I hope makes its way into a tool like postgresqltuner, although I must confess this is the first time I have heard of it (great-looking utility, though!). I am more of a generalist who happens to have spent time scaling Postgres, so I can't comment much on what a "true" DBA would do, but I find myself applying sensible settings in the postgres config for parameters I have used before. This would be more on the "manual" side of your question. To be fair, I think the relatively straightforward nature of PostgreSQL and its great configuration documentation make this easy once you know a parameter exists.
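To give a concrete (and purely illustrative) example of what I mean by sensible settings, this is the kind of thing I'd start from on, say, a 16 GB box with SSDs; the values are assumptions for that hardware, not recommendations:

    -- Illustrative starting points for a 16 GB RAM machine with SSDs;
    -- tune for your own workload before taking any of this at face value.
    ALTER SYSTEM SET shared_buffers = '4GB';         -- ~25% of RAM is a common rule of thumb
    ALTER SYSTEM SET effective_cache_size = '12GB';  -- how much OS cache the planner may assume
    ALTER SYSTEM SET work_mem = '64MB';              -- per sort/hash operation, so watch concurrency
    SELECT pg_reload_conf();                         -- note: shared_buffers still needs a restart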
If I were to skew more of my development cycles toward dedicated database management, I think incorporating analytic tools like postgresqltuner would be a must, although I would probably cross-reference any suggestions with my past experience and dig into the Postgres docs on the specific parameters it highlights. Regardless, I suspect it would be a valuable source of additional context.