Offer Sensitive and Specific deduplication algorithms instead of New and Legacy
New RefWorks currently offers a choice of two deduplication algorithms that work differently. They find comparable numbers of duplicates, but not the same duplicates, so users must run both algorithms to find the maximum number of duplicates.
Users would be better served by a choice between a high-sensitivity search (flag every record identified as a duplicate by either algorithm) and a high-specificity search (flag only those records identified as duplicates by both algorithms).
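In set terms, the two proposed modes are just the union and the intersection of the two algorithms' results. A minimal sketch of the idea (the function names and record IDs below are illustrative, not part of any RefWorks API):

```python
def high_sensitivity(flagged_by_a: set, flagged_by_b: set) -> set:
    """Union: flag anything either algorithm considers a duplicate."""
    return flagged_by_a | flagged_by_b

def high_specificity(flagged_by_a: set, flagged_by_b: set) -> set:
    """Intersection: flag only records both algorithms agree on."""
    return flagged_by_a & flagged_by_b

# Hypothetical results from the two existing algorithms
a = {"rec1", "rec2", "rec3"}
b = {"rec2", "rec3", "rec4"}

print(sorted(high_sensitivity(a, b)))  # ['rec1', 'rec2', 'rec3', 'rec4']
print(sorted(high_specificity(a, b)))  # ['rec2', 'rec3']
```

The sensitive mode maximizes recall (no duplicate missed by either algorithm escapes), while the specific mode minimizes false positives (only duplicates confirmed by both are flagged), letting users pick the trade-off instead of running the algorithms twice.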
I am happy to report that this item is now in Planned status and is part of the 2020 roadmap.