Dynamically increase the sampling limit as necessary to address significant variability or underflow in the cardinality estimates.
Do not arbitrarily choose among paths spanning the same vertices when we have a cardinality underflow. Instead, consider deepening the samples until we have enough information to make a choice. This approach should be tested, since it might lead to substantially more work; on the other hand, a cost metric that reflects tuples read which do not join could help break ties among paths with cardinality underflow.
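The deepening-and-tie-breaking idea above can be sketched as follows. This is a minimal illustration, not engine code: `sample_join` is a hypothetical stand-in for running a join path over a bounded input sample (here simulated with a per-tuple selectivity), and the specific doubling schedule and the tuples-read-per-output-tuple tie-break metric are assumptions.

```python
import random

def sample_join(path, limit, rng):
    # Stand-in for joining a sample of at most `limit` input tuples
    # through `path`. Returns (output_count, tuples_read). A selective
    # path may produce zero outputs at small limits -- an underflow.
    out = sum(1 for _ in range(limit) if rng.random() < path["selectivity"])
    return out, limit

def choose_path(paths, initial_limit=100, max_limit=100_000, seed=42):
    """Deepen the sample until at least one candidate path escapes
    cardinality underflow, then break remaining ties by the cost of
    tuples read per output tuple (tuples read that do not join make a
    path look worse, which is the intended tie-breaking signal)."""
    rng = random.Random(seed)
    limit = initial_limit
    while True:
        results = []
        for p in paths:
            out, read = sample_join(p, limit, rng)
            results.append((p, out, read))
        survivors = [r for r in results if r[1] > 0]
        if survivors or limit >= max_limit:
            break
        limit *= 2  # deepen the sample and retry rather than guess
    if not survivors:
        # Every path still underflows at max_limit: fall back to the
        # path that read the fewest tuples for zero output.
        return min(results, key=lambda r: r[2])[0]
    # Prefer the lowest read cost per output tuple.
    return min(survivors, key=lambda r: r[2] / r[1])[0]
```

The point of the sketch is the control flow: no choice is made while every candidate underflows; the sample is deepened instead, and only then is the tuples-read metric used to rank the candidates.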
This implies that we need to retain the distribution of the estimated costs for each vertex and cutoff join (but not the sampled solution sets) so that we can assess the variability in the estimates as we resample. Since the sample size increases with each round of sampling, the estimates should converge rapidly. Depending on the nature of the estimate, we may know in advance that it will converge downward (from an upper bound) or upward (from an underflow).
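One way to retain just the estimate distribution, and not the sampled solutions, is a small per-vertex (or per-cutoff-join) tracker like the sketch below. The class name, the relative-tolerance convergence rule, and the `bound` direction flag are all assumptions introduced for illustration.

```python
from statistics import mean

class EstimateTrack:
    """Retain the estimate produced by each (re)sampling round for one
    vertex or cutoff join, discarding the sampled solution sets."""

    def __init__(self, bound=None):
        # bound is "upper" when the estimate is known to converge
        # downward (from an upper bound) or "lower" when it is known to
        # converge upward (from an underflow); None if unknown.
        self.bound = bound
        self.history = []  # (sample_size, estimate) per round

    def record(self, sample_size, estimate):
        self.history.append((sample_size, estimate))

    def converged(self, rel_tol=0.10, window=3):
        """Treat the estimate as converged once the last `window`
        estimates vary by less than rel_tol relative to their mean."""
        if len(self.history) < window:
            return False
        recent = [e for _, e in self.history[-window:]]
        m = mean(recent)
        if m == 0:
            return all(e == 0 for e in recent)
        return (max(recent) - min(recent)) / m < rel_tol
```

Because each round uses a larger sample, the recorded sequence should tighten quickly, and the `bound` flag lets the optimizer interpret a still-moving estimate as at least a one-sided bound in the meantime.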
If a path segment is a clear winner but leads into a cardinality underflow, then we could execute that join path segment in its entirety in order to obtain an exact cardinality.
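The fallback above amounts to a simple escalation rule, sketched here with two hypothetical hooks standing in for the query engine: `sample(segment)` returns a sampled estimate plus an underflow flag, and `execute(segment)` runs the segment to completion and returns the exact cardinality.

```python
def refine_segment_estimate(segment, sample, execute):
    """For a clearly winning path segment, replace an underflowing
    sampled estimate with the exact cardinality obtained by running
    the segment to completion. Returns (cardinality, is_exact)."""
    estimate, underflow = sample(segment)
    if underflow:
        # Full execution is justified only because this segment already
        # won on every other criterion; the exact count then anchors
        # cost estimates for the joins that extend it.
        return execute(segment), True
    return estimate, False
```

Since the segment would be executed anyway if chosen, the extra work is bounded by the cost of the winning plan prefix itself.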