During the delivery of Solid Quality’s DBA Bootcamp, I go over query performance tuning, and one of the items we discuss that is very useful when tuning queries is the TotalSubtreeCost value. To be fair, it is called TotalSubtreeCost in the text-based execution plans, actual or estimated, but it has a different name in the graphical plans: Estimated Subtree Cost. Just to make everything even more confusing, it has a third name in the XML showplans: StatementSubTreeCost.

So what is TotalSubtreeCost? It is the cost of the query and the cost of each of the statements within the query. If you are interested in how the optimizer determines the cost of a query, you can find the formulas for both SQL Server 2000 and SQL Server 2005 in this nice white paper put out by Microsoft: Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005. While working out the cost of a query with the formula is interesting, I always tell my students that unless you live in your mother’s basement and have nothing better to do over the weekend, just take the cost of a query at face value. Know that the cost can pretty much be compared across servers and across executions of the query; there will be some differences, but it will be close enough that a query with a cost of 500 is always going to be a more complex query than one with a cost of 10.
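If you want to see this value for yourself, one quick way is to turn on the text-based showplan before running a query. The SET SHOWPLAN_ALL option is standard T-SQL; the sample query below is just a placeholder, so substitute one of your own:

```sql
-- Return the estimated text plan (without executing the query);
-- TotalSubtreeCost appears as a column in the output.
SET SHOWPLAN_ALL ON;
GO

SELECT *            -- placeholder query; any query will do
FROM sys.objects;
GO

SET SHOWPLAN_ALL OFF;
GO
```

The first row of the output carries the TotalSubtreeCost for the whole statement; the rows beneath it show the cost of each operator's subtree.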

So what do I do with the cost of a query when I am tuning it? First, you have to realize that the value of TotalSubtreeCost is cumulative from the first statement executed up to the last statement executed in the query. Then you have to realize that the showplan output you are seeing is not in the order in which the statements execute. Putting these two together, you must reorder the statements, find the TotalSubtreeCost of the first statement, then subtract the value of the first statement from the value of the second statement to execute in order to find the cost of the second statement. Remember, the value is cumulative from the first statement to the last, so the value shown for each individual statement is the cost of all the statements that have previously executed plus the cost of that individual statement.

Confused? Don’t worry, I am writing this and I get confused. It is not easy to figure out the cost of an individual statement using TotalSubtreeCost. What you can do instead is take the overall value for this column from the top of the execution plan and compare it across different executions of the query after you have done some work on it. As the value of TotalSubtreeCost goes down, you are making the query less expensive to execute. Usually the less expensive the query is, the better optimized it is; I said usually, not always.

Another thing you can do is look for large leaps in the value of this column between two statements. For example, statement 4 executes first with a cost of .0078, and statement 3 executes next, but the TotalSubtreeCost value now reads 100.0078. This gives you a clue that statement 3 had an individual cost of 100 and may be a good candidate for optimization. Whether you can actually do anything to statement 3 is a different story.
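To make the arithmetic concrete, here is that same hypothetical example worked out as comments, with the statements already reordered into execution order:

```sql
-- TotalSubtreeCost is cumulative across statements in execution order:
--
--   Statement 4 (runs 1st): TotalSubtreeCost =   0.0078
--                           individual cost  =   0.0078
--   Statement 3 (runs 2nd): TotalSubtreeCost = 100.0078
--                           individual cost  = 100.0078 - 0.0078 = 100
--
-- The jump from 0.0078 to 100.0078 flags statement 3 as the tuning candidate.
```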

Given that higher cost queries are usually more complex and often take more work to optimize, how can you use this to your advantage? One of the things I am toying with is searching through the query DMVs and pulling out a list of queries with the highest execution cost. I can then take these queries and return lists of the queries with the most reads, most writes, most recompiles, highest CPU usage, and longest duration. Using the TotalSubtreeCost value as a filter will help me eliminate queries with a value under 100, a good starting point for most environments, as queries with a cost of 100 or more tend to be the ones that get the attention of the DBA or developer for not performing at an optimal level. You will still need to worry about queries under a cost of 100 if those queries are executed hundreds or thousands of times a second. Another listing of queries you constantly need to have on hand is queries grouped by number of executions per second. This allows you to work on a query even if it only takes 3 seconds to complete but runs hundreds of times a second, which makes it one of your highest resource consumers.

So how do I find the queries with high TotalSubtreeCost values? First you will need to create a nice little function that Stuart Ozer from Microsoft supplied in this nice little Microsoft Customer Advisory Team blog posting: Obtaining Statement-Level Query Plans. This function will take the XML formatting of the showplans as they exist in the DMVs and parse it out into something useful. After creating this function in your database, you simply utilize the function with a few other DMVs in a statement like this:

SELECT
     q.dbid
    ,q.objectid
    ,pln.statement_text
    ,pln.statement_plan
    ,pln.statement_type
    ,pln.statement_subtree_cost
    ,pln.statement_estimated_rows
    ,pln.statement_optimization_level
FROM sys.dm_exec_cached_plans qs
CROSS APPLY statement_level_query_plan(qs.plan_handle) pln
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) q
WHERE pln.statement_subtree_cost > 100
ORDER BY pln.statement_subtree_cost DESC

After executing this simple query, you should get back a listing of all queries with a cost greater than 100. You can then do some more work and use this listing to filter out lower cost queries when creating lists of the queries you need to optimize.

Hint: Research sys.dm_exec_query_stats to find out how you can pull execution statistics out of the procedure cache and group them by reads, writes, CPU usage, recompiles, and duration.
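As a starting point for that research, here is a rough sketch, not a finished solution; the columns come from sys.dm_exec_query_stats in SQL Server 2005, and plan_generation_num is only a rough stand-in for recompile counts:

```sql
-- Top cached statements by logical reads; swap the ORDER BY column
-- (total_logical_writes, total_worker_time, total_elapsed_time,
--  plan_generation_num, execution_count) to build the other listings.
SELECT TOP 25
     qs.execution_count
    ,qs.total_logical_reads
    ,qs.total_logical_writes
    ,qs.total_worker_time      -- CPU time, in microseconds
    ,qs.total_elapsed_time     -- duration, in microseconds
    ,qs.plan_generation_num    -- rough recompile indicator
    ,st.text AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;
```

Dividing execution_count by the seconds since the plan was cached will get you close to the executions-per-second listing mentioned above.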

I know this blog posting was not very detailed and probably left you with more questions than answers, but that is what it was designed to do. The more work you have to do to answer those questions, the more you will actually remember the information in this blog the next time you need it.

Until next time, have a great day and remember me for your SQL Server training needs.