In a recent talk, someone noted that they always choose explicit conversions between data types to prevent unforeseen issues. That's what I'd recommend as well. I have seen code in production run for years using implicit conversions, only to start failing when someone finally entered an invalid character in a row.
How does that happen? Usually when someone uses character data types to store data that can be represented as characters, even though the data must be dealt with in its native format. An example of this is storing a date in a varchar(10), or sticking numeric quantities in a character field to preserve formatting such as dollar signs or commas.
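As an illustration, here is a minimal sketch of the pattern, using a hypothetical Orders table (the table and column names are mine, not from any real system):

```sql
-- Hypothetical table: dates stored as varchar(10) instead of a date type
CREATE TABLE Orders (
    OrderID   int IDENTITY(1,1) PRIMARY KEY,
    OrderDate varchar(10)   -- e.g. '09/01/2021'; a date/datetime type would be safer
);

-- This comparison relies on an implicit varchar-to-datetime conversion.
-- It works only as long as every row holds a valid date string.
SELECT OrderID
FROM   Orders
WHERE  OrderDate < GETDATE();
```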
That kind of code can work, pass a QA process, and live for years in a production system. However, sooner or later someone will enter data that breaks a query and returns an error. Depending on your error handling, this can be difficult to track down because it's very data dependent: the code might work for some data sets but not for others.
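Continuing the hypothetical Orders example above, a single bad row is all it takes. Nothing fails at insert time, because the column is just a varchar; the error only surfaces later, when a query forces the conversion:

```sql
-- Works for months...
INSERT INTO Orders (OrderDate) VALUES ('09/01/2021');

-- ...until one invalid value slips in. The insert itself succeeds:
INSERT INTO Orders (OrderDate) VALUES ('N/A');

-- Now the same query that passed QA fails at runtime with a
-- conversion error (in SQL Server, Msg 241: "Conversion failed when
-- converting date and/or time from character string.")
SELECT OrderID
FROM   Orders
WHERE  OrderDate < GETDATE();
```

Note that the error appears or disappears depending on which rows the query touches, which is exactly what makes it so hard to reproduce.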
The best advice I can give is to store data in the proper data types whenever possible, and use explicit conversions when comparing data that might be of disparate types. Don't always expect '09/01/2001' to compare cleanly to getdate(), and don't expect '1' to equal 1 in your code. At some point bad data will get into the system and those comparisons will error out.
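Against the same hypothetical Orders table, an explicit conversion makes the intent visible and gives you a place to handle bad values. One option is TRY_CONVERT (available in SQL Server 2012 and later), which returns NULL instead of raising an error when a value cannot be converted:

```sql
-- Explicit conversion: rows with unconvertible dates are simply
-- excluded (TRY_CONVERT returns NULL for them) rather than
-- crashing the query. Style 101 is mm/dd/yyyy.
SELECT OrderID
FROM   Orders
WHERE  TRY_CONVERT(date, OrderDate, 101) < GETDATE();

-- Likewise, don't lean on '1' = 1 via implicit conversion;
-- state the conversion you mean:
SELECT CASE WHEN CONVERT(int, '1') = 1
            THEN 'equal' ELSE 'not equal' END;
```

Whether you filter out the NULLs, log them, or fail loudly is a design choice, but with explicit conversion it is at least a choice you get to make.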