Looking into it more, and trying to visualise some of the propositions upthread, I'm leaning towards modelling a (mostly non-streamlined) shot as a comparison of three circles (spherical cows in a vacuum):
- The projection of the actual target.
- The projection of the achievable area where bullets would likely fall.
- The projection of the target's positional uncertainty.
It seems that if the target's positional uncertainty is smaller than the bullet-cone uncertainty, the former is of little relevance. Conversely, when positional uncertainty exceeds ballistic uncertainty, the prudent move is simply to widen the bullet cone's cross-section and hope for the best hit. When the two circles are comparable, things get messy, and I'm not sure that regime is worth modelling. A rough sketch of those regimes follows below.
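To make the regimes concrete, here's a minimal Python sketch; the 2x dominance threshold and all the names are placeholders I made up, not anything established in the thread:

```python
def dominant_regime(shot_radius: float, position_radius: float) -> str:
    """Which uncertainty circle dominates the shot. The 2x dominance
    threshold and all names here are illustrative assumptions."""
    if shot_radius >= 2 * position_radius:
        return "ballistic"    # positional uncertainty barely matters
    if position_radius >= 2 * shot_radius:
        return "positional"   # widen the cone and hope for the best
    return "comparable"       # the messy middle regime
```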
But when one uncertainty significantly exceeds the other, it seems to make skill irrelevant. Maybe that's not quite the case, though: the positional uncertainty should have a radius of roughly the target's 'instant' speed times (reaction delay + bullet flight time), so its cross-sectional area grows with the square of that product. As a spherical cow in a vacuum, the target would hypothetically jerk in a random direction at non-enhanced speed, and the shooter would try to adjust for that; the adjustment speed can perhaps be said to depend on skill level, adjusted for gun unwieldiness (Bulk).
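As a sanity check on that scaling, a one-liner (the units and names are mine, not from the thread):

```python
import math

def position_uncertainty_area(speed: float, reaction_delay: float,
                              flight_time: float) -> float:
    """Area of the positional-uncertainty circle, assuming the target can
    jerk in any direction at `speed` for the whole (reaction + flight)
    window. Speed in m/s, times in s, area in m^2."""
    radius = speed * (reaction_delay + flight_time)
    return math.pi * radius ** 2
```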
This seems to be heading towards resolving a shot as the worse of two comparisons: gun skill + weapon accuracy vs. (quadratically-modified) range, or gun skill + handling adjustment - flight-time penalty vs. (quadratically-modified) sudden-movement speed of the target.
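In code, that worst-of-two model might look like the following sketch; the additive structure and every penalty name are placeholder assumptions for whatever the final system plugs in:

```python
def effective_skill(skill: float, accuracy: float, handling: float,
                    range_penalty: float, flight_penalty: float,
                    movement_penalty: float) -> float:
    """Worst-of-two-checks model: the shot is only as good as the weaker
    of (a) the ballistic comparison and (b) the tracking comparison.
    Nothing here is an established rule; it's a placeholder structure."""
    ballistic = skill + accuracy - range_penalty
    tracking = skill + handling - flight_penalty - movement_penalty
    return min(ballistic, tracking)
```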
Which would, under some circumstances, make target dodge irrelevant: when flight time is low and distance contributes more to the uncertainty circle than sudden movement does.
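For example, with the sketch above (arbitrary numbers): at long range with a fast bullet, the ballistic check binds, so the movement penalty never changes the outcome.

```python
# Ballistic side: 12 + 3 - 8 = 7; tracking side: 12 + 0 - 1 - 2 = 9.
print(effective_skill(skill=12, accuracy=3, handling=0,
                      range_penalty=8, flight_penalty=1,
                      movement_penalty=2))   # 7: ballistic check binds
print(effective_skill(skill=12, accuracy=3, handling=0,
                      range_penalty=8, flight_penalty=1,
                      movement_penalty=0))   # still 7: dodge changed nothing
```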