Posted by Ami Levin on Tuesday, 08 May 2012
Yesterday, at our 118th Israeli SQL Server User Group Meeting, Adi Cohn delivered a session on common performance pitfalls. One of the issues he mentioned is something every DBA and database designer should be aware of. Many applications use overly large size limits for their variable-size data types. The reasoning in many cases is simply "Why not?" If you have a variable-size column and you made the choice to use one of the VAR data types anyway, why limit the size? Even if you expect at most 300 characters in the 'comments' column, for example, what is wrong with defining it as 3,000? It's just metadata, right? Wrong...
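As an illustrative sketch (the table and column names below are hypothetical), these two definitions store a 300-character comment in exactly the same space on disk, yet they behave very differently at query time:

```sql
-- Both columns store a 300-character comment identically on disk.
CREATE TABLE dbo.FeedbackRightSized (
    Id       INT IDENTITY PRIMARY KEY,
    Comments VARCHAR(300)  NOT NULL   -- sized to the actual business requirement
);

CREATE TABLE dbo.FeedbackOversized (
    Id       INT IDENTITY PRIMARY KEY,
    Comments VARCHAR(3000) NOT NULL   -- "why not?" sizing: same storage, inflated estimates
);
```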
Although storage will be physically identical whether you define your VARCHAR as 300 or 3,000, the choice can still have a huge impact on performance. SQL Server must estimate the workspace memory each query requires and allocate it before the query begins to execute. For variable-size columns, the estimate is that the data will occupy half of the column's defined maximum size. So for a column defined as 300 bytes, the per-row estimate will be 150 bytes, but for 3,000 bytes it will be 1,500. On systems that experience memory pressure this can make a huge difference: your queries will wait for memory grants they don't actually need, and memory that could serve SQL Server for far more useful purposes is allocated needlessly.
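One way to observe this (a sketch, assuming a hypothetical dbo.Feedback table with a VARCHAR Comments column) is to run a memory-consuming operation such as a sort and then inspect the grant it received, either in the actual execution plan's MemoryGrant property or through the sys.dm_exec_query_memory_grants DMV:

```sql
-- A sort forces SQL Server to request a memory grant sized from column estimates:
-- a VARCHAR(3000) column is assumed to hold ~1,500 bytes per row, VARCHAR(300) only ~150.
SELECT Id, Comments
FROM dbo.Feedback
ORDER BY Comments;

-- While the query runs, from another session, inspect the grant it received:
SELECT session_id,
       requested_memory_kb,   -- how much the query asked for
       granted_memory_kb,     -- how much it actually got
       wait_time_ms           -- time spent queued for the grant under memory pressure
FROM sys.dm_exec_query_memory_grants;
```

Comparing the requested grant for the same data in a VARCHAR(300) versus a VARCHAR(3000) column makes the cost of the "why not?" sizing visible.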
The mechanism responsible for handling memory grants is called the "Resource Semaphore". You can read more about it in this excellent post on the SQL Server Query Processing team blog: http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
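You can also look at the resource semaphore state directly. As a sketch, the query below uses the sys.dm_exec_query_resource_semaphores DMV to show how much workspace memory is available for grants and how many queries are queued waiting for one:

```sql
-- One row per resource semaphore (regular and small-query):
SELECT resource_semaphore_id,
       target_memory_kb,      -- memory the semaphore aims to make available for grants
       available_memory_kb,   -- memory not yet granted
       granted_memory_kb,     -- memory currently granted to running queries
       grantee_count,         -- queries currently holding a grant
       waiter_count           -- queries queued, waiting for a grant
FROM sys.dm_exec_query_resource_semaphores;
```

A non-zero waiter_count is a sign that queries are stalled waiting for workspace memory, which oversized VAR columns make more likely.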
This entry was posted by admin and tagged common pitfalls, data types on Thursday, 29 October 2015 19:46 in SQL Server Tuning Blog.