There's a third option, I think: #3, he may evaluate monsters against a new metric for monster deadliness (possibly one that takes monster tactics into account, and possibly one invented by someone else) before inserting them into his adventure. I think this is distinct from both #1 (acute awareness of every detail) and #2 (complete indifference to outcomes).
Employing that metric may or may not be as simple as adding up a bunch of numbers and looking the result up in a table. It could be a neural network, a support vector machine, or some kind of deep learning algorithm. But at the end of the day, what you're doing is taking a bunch of known inputs (monster stats and behaviors; the circumstances under which the encounter occurs), partially-known inputs (player stats and behaviors; if you're writing a published adventure, or if you object to too much anti-PC customization, the information you have here could be very limited), and some unknowable stochastic inputs (die rolls), and trying to say something about the outputs (e.g. how likely the players are to TPK, or what fraction of total PC resources are likely to be expended).
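To make the input/output shape concrete, here's a minimal Monte Carlo sketch of the idea: feed in known stats, let die rolls supply the stochastic part, and estimate a TPK probability from many simulated fights. The combat model here is deliberately crude (pooled HP, normally-distributed damage per round standing in for attack and damage rolls), and every function name and number is hypothetical, not from any published system:

```python
import random

def simulate_encounter(party_hp, party_dpr, monster_hp, monster_dpr, rng):
    """One simulated fight; returns True if the party wins.

    HP and damage-per-round are pooled per side; the Gaussian noise
    is a stand-in for attack rolls and damage dice.
    """
    while party_hp > 0 and monster_hp > 0:
        monster_hp -= max(0.0, rng.gauss(party_dpr, party_dpr * 0.3))
        if monster_hp <= 0:
            return True
        party_hp -= max(0.0, rng.gauss(monster_dpr, monster_dpr * 0.3))
    return party_hp > 0

def tpk_probability(party_hp, party_dpr, monster_hp, monster_dpr,
                    trials=10_000, seed=0):
    """Estimate P(TPK) as the fraction of simulated fights the party loses."""
    rng = random.Random(seed)
    losses = sum(
        not simulate_encounter(party_hp, party_dpr, monster_hp, monster_dpr, rng)
        for _ in range(trials)
    )
    return losses / trials
```

A real version would replace the pooled-HP loop with per-creature stats and scripted tactics, but the overall structure (known inputs in, distribution over outcomes out) stays the same.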
You can tell I've been thinking a lot lately about how to apply machine learning to 5E, and what kinds of predictions might be useful to make.
I've considered the idea of rating encounters in terms of "Champions", as in "this is a Champion-4/15 adventure," meaning "four 15th-level Champions played straightforwardly have a 50% chance of at least one of them surviving." Then you could also quantify things like, "If the party finds the Sunsword, Strahd drops from a Champion-3/10 threat to a Champion-2/10 or Champion-1/13," or "letting Strahd exploit Greater Invisibility, crazy Stealth, and his legendary actions increases his difficulty from Champion-2/10 to Champion-5/10." I'm not sure if that's the best form for guidance to take, but it's something to consider. Input is welcome. Would that kind of language be useful?
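Computing such a rating could amount to searching over party sizes and levels for the combinations that land at roughly 50% survival odds. A hypothetical sketch of that search, where `encounter_win_prob` stands in for whatever simulator or learned model estimates the chance that at least one Champion survives (both the function and its toy replacement in the usage note are assumptions for illustration):

```python
def champion_rating(encounter_win_prob, levels=range(1, 21), max_count=8):
    """Find Champion-N/L ratings for an encounter.

    encounter_win_prob(count, level) -> estimated probability that at
    least one of `count` Champions of the given `level`, played
    straightforwardly, survives the encounter.

    Returns (count, level) pairs: for each level, the smallest party
    size whose survival odds reach 50%. Levels where even `max_count`
    Champions fall short are omitted.
    """
    ratings = []
    for level in levels:
        for count in range(1, max_count + 1):
            if encounter_win_prob(count, level) >= 0.5:
                ratings.append((count, level))
                break
    return ratings
```

With a toy model like `lambda count, level: min(1.0, count * level / 40)`, this yields ratings such as (2, 10) and (1, 20), and drops levels too low to ever reach 50%. Swapping in a Monte Carlo estimate of survival would make the same search produce real ratings.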