Is it more efficient to size a varchar field as a power of two rather than some other number? I suspect not, since the SQL Server default is 50. However, I've heard (but never confirmed) that sizing fields as a power of 2 is more efficient because they fall on even byte boundaries, and computers process data in bits and bytes.
So, does a field declared as varchar(32) or varchar(64) have any real benefit over varchar(50)?
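For context, here's a small sketch of how one might check this empirically. SQL Server stores varchar values as variable-length data (actual bytes used plus a length prefix), so the declared maximum shouldn't affect on-disk size for the same value; `DATALENGTH` reports the bytes actually stored. The table and column names below are made up for illustration:

```sql
-- Hypothetical temp table for illustration: three columns with
-- different declared maximums, holding the same value.
CREATE TABLE #SizeTest (
    a varchar(32),
    b varchar(50),
    c varchar(64)
);

INSERT INTO #SizeTest VALUES ('hello', 'hello', 'hello');

-- DATALENGTH returns the number of bytes actually stored for each
-- value; all three columns report 5 regardless of declared size.
SELECT DATALENGTH(a) AS bytes_a,
       DATALENGTH(b) AS bytes_b,
       DATALENGTH(c) AS bytes_c
FROM #SizeTest;

DROP TABLE #SizeTest;
```

If the declared maximum mattered for storage, the three columns would report different sizes here; they don't.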