I have participated in several study closeout debrief meetings. Data quality is one of the hot topics that generates a lot of discussion. Everyone has a different perspective on how to clean the data, depending on which functional group they come from.
The data management group will ensure the critical fields are 100% clean and spot check the rest. Statisticians would like to see 100% clean on all data fields; their argument is that if a field is not critical, why collect it at all. As statistical programmers, we usually find and report data issues during the data manipulation process, regardless of whether the fields are critical or not.
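The kind of issue-spotting programmers do during data manipulation can be automated to a degree. Below is a minimal sketch in Python using pandas; the dataset and column names (SUBJID, AESTDTC, AETERM) are hypothetical CDISC-style examples, not taken from any real study, and the checks shown (missing critical fields, unparseable dates) are just two common cases:

```python
import pandas as pd

# Hypothetical adverse-event dataset; column names are assumptions for illustration.
df = pd.DataFrame({
    "SUBJID": ["001", "002", "003"],
    "AESTDTC": ["2023-01-05", None, "2023-02-30"],  # AE start date (text)
    "AETERM": ["Headache", "Nausea", None],          # reported term
})

issues = []

# Critical-field check: flag missing values that would block analysis.
for col in ["AESTDTC", "AETERM"]:
    for subj in df.loc[df[col].isna(), "SUBJID"]:
        issues.append(f"{subj}: missing {col}")

# Spot check: flag dates that are present but do not parse as real calendar dates.
parsed = pd.to_datetime(df["AESTDTC"], errors="coerce")
for subj in df.loc[df["AESTDTC"].notna() & parsed.isna(), "SUBJID"]:
    issues.append(f"{subj}: invalid date in AESTDTC")

for issue in issues:
    print(issue)
```

Each flagged record would then go back to data management as a query, which is exactly the feedback loop described above.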
There is no right or wrong approach to ensuring data quality. Over-checking the data is always safer than under-checking, but the return on the time invested may be small. So the question is: how clean is clean? My simple answer is that when we run out of issues to query, the data must be clean enough.