SQL Server Performance Optimization — Index Optimization and Compression
Indexes directly control how efficiently data can be accessed for common SELECT, INSERT, UPDATE, and DELETE operations, so designing (and maintaining!) proper
index structures is one of the most important factors in SQL optimization and overall database performance. SQL database compression — minimizing the physical surface area that SQL Server needs to read from or write to — is also a critical factor in overall SQL performance tuning.
Optimal Use of SQL Indexes
The absence, misuse, or overuse of indexes — in addition to slowing database performance — can also
cause other issues, including concurrency locking, transaction deadlocks, and other problems. Build optimized indexes with good SQL tuning best practices, and your database will
fly. Build the wrong index structures and speed will suffer, sometimes significantly. For example, too many indexes on a frequently updated table
unnecessarily increase the time SQL Server spends maintaining those indexes on every insert and update.
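SQL Server's index-usage statistics can help spot indexes that cost far more to maintain than they return in read benefit. The query below is a minimal sketch: the decision to drop an index is a judgment call, and the counters in sys.dm_db_index_usage_stats reset when the instance restarts, so gather them over a representative workload period.

```sql
-- Find nonclustered indexes in the current database that are written to
-- far more often than they are read (candidates for review, not automatic removal)
SELECT  OBJECT_NAME(s.object_id)                      AS table_name,
        i.name                                        AS index_name,
        s.user_updates                                AS writes,
        s.user_seeks + s.user_scans + s.user_lookups  AS reads
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
        AND i.type_desc = 'NONCLUSTERED'
ORDER BY s.user_updates DESC;
```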
Defragmenting SQL Indexes
You defragment indexes in a SQL Server database for the same reason you defrag the hard drive on your computer: to make the I/O more efficient and thus improve SQL performance. When an index is first built, its logical order matches the physical order in which
the data is stored. After many writes to the database, however, the data becomes fragmented. This increases the I/O activity required to access a particular set of data, gradually eroding database performance. This is why it is important to rebuild indexes regularly to support good SQL tuning — especially for high-
frequency update tables.
Optimizing SQL database indexes is not a one-and-done exercise, but rather an ongoing process required to ensure optimal database performance.
Establish routines that run nightly to check for Table and Index fragmentation.
As a rule of thumb, when fragmentation exceeds 30%, affected tables and indexes should be rebuilt. The larger the table and the
heavier its write activity, the more frequently its indexes will need to be rebuilt.
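A nightly fragmentation check can be built on sys.dm_db_index_physical_stats. This is a sketch: the table and index names in the maintenance statements are placeholders, and the 30% rebuild threshold follows the rule of thumb given above.

```sql
-- Report indexes in the current database exceeding 30% fragmentation
SELECT  OBJECT_NAME(ps.object_id)          AS table_name,
        i.name                             AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id
WHERE   ps.avg_fragmentation_in_percent > 30
ORDER BY ps.avg_fragmentation_in_percent DESC;

-- Rebuild a heavily fragmented index (names are illustrative placeholders)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- For moderate fragmentation (roughly 5–30%), a reorganize is often sufficient
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
```

A reorganize is an online, incremental operation; a rebuild recreates the index from scratch and also refreshes its statistics, which is why it is preferred once fragmentation is severe.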
SQL Data Compression
Compression is NOT about disk space (disk space is cheap!). SQL database compression is about performance. While SQL Server is
a very fast engine (from a processing perspective), one of the most common bottlenecks to optimal SQL Server performance is reading and writing data
to the disk. SQL Server is designed for very large data sets, but it can only keep a fraction of the data it needs in memory, so the SQL engine
relies heavily on reading and writing data to the disk. Minimizing the physical surface area that SQL Server needs to read from or write to is
therefore essential to optimal SQL performance tuning.
There is nearly a one-to-one correlation between the number of pages read off a disk and the speed of SQL Server operations.
For example, if you compress the data into one-half of its physical footprint, you will generally realize double the performance
in operations against that data. This holds true even for single-row operations, because there is a higher likelihood of the data being in
SQL Server’s memory cache. In real-world SQL Server applications, roughly 9 out of 10 operations are reads, while only a small fraction
of a typical application's work is actually writing data to the disk. This means that for many tables (large or small – actually more so for
small tables such as lookup tables), a record written once will typically be read about ten times, so compression becomes very important to SQL performance tuning!
SQL Server compression occurs at a very low level. Applications that run against the database, and even the T-SQL code itself (procedures,
functions, tables, and indexes), are not aware that data is being compressed. Therefore, the risk to any application of
compressing data is minimal. Finally, in order to use SQL Server compression, you must license the Enterprise edition of SQL Server 2014 (or 2012/2008). It is not available
in Standard Edition or SQL Server Express.
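Compression is enabled per table and per index. A minimal sketch, using a hypothetical dbo.Orders table: estimate the expected savings first with sp_estimate_data_compression_savings, then rebuild with PAGE compression if the numbers justify it.

```sql
-- Estimate space savings before committing to compression
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'Orders',   -- placeholder table name
     @index_id         = NULL,       -- all indexes on the table
     @partition_number = NULL,       -- all partitions
     @data_compression = 'PAGE';

-- Enable page-level compression on the table (clustered index/heap)
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Nonclustered indexes are compressed separately
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

PAGE compression includes ROW compression plus prefix and dictionary compression; ROW alone is cheaper in CPU and can be the better choice for write-heavy tables.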