
Friday, August 3, 2012

Copy Multiple Records and Their Related Records With SQL

A nice little trick for copying multiple records and their related records with SQL.

Suppose you have a Users table and a related UserPhones table, and you'd like to copy users 1-3. You could go with a cursor and copy the users one by one along with all their phones.

But that's a waste of perfectly good CPU cycles.
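
For reference, here's a minimal sketch of the schema the example below assumes; Users has an identity key, which is exactly what makes a plain copy non-trivial:


create table Users
(
    UserId int identity(1,1) primary key,
    Username nvarchar(100)
)

create table UserPhones
(
    UserId int,
    PhoneNumber nvarchar(20)
)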

Here's an example:


-- the range of users to copy
declare @fromUserId int = 1
declare @toUserId int = 3

-- maps each original user id to the id of its newly inserted copy
declare @UsersOldNew table (OriginalUserId int, NewUserId int)

-- the 1=0 match condition is always false, so every source row falls into
-- the "when not matched" branch and is inserted as a new user
merge into Users u
using 
(
    select UserId, Username
    from Users
    where UserId between @fromUserId and @toUserId
) as originalUsers
    on 1=0
when not matched then
    insert (Username)
    values (originalUsers.Username + '_from_' + convert(nvarchar(10), originalUsers.UserId))
-- merge's output clause can see both the source and the inserted rows,
-- which is what lets us capture the old/new id pairs
output originalUsers.UserId, inserted.UserId into @UsersOldNew;


-- copy the phones, swapping in the new user ids
insert into UserPhones
select NewUserId, PhoneNumber
from UserPhones
    join @UsersOldNew
        on [@UsersOldNew].OriginalUserId = UserPhones.UserId


Cool!

So how did we do it?
First, we declared a table variable to hold the mapping between old and new identities. You can do this with any identity type you want: int, guid, it doesn't matter.

Then we executed a merge with an always-false match condition, which causes the merge to create a new record for every existing record returned from originalUsers.

The third part of the merge is an output clause, which writes the old and new identities to the table variable.

And the last part just joins on that table and inserts, for each phone record the old UserId had, a new record with the new UserId.

Simple, fast and very cool! Good job, Microsoft!


Monday, June 11, 2012

SQL Query Engine Without Dynamic Queries

There are many ways to implement query engines: dynamic SQL, stored procedures with a finite number of IFs, dynamically creating a stored procedure for each query, and the solution I'm writing about here.

(I'm sure there are other solutions, I'll be glad to hear about them...)

Dynamic SQL is probably the fastest if you need to execute a one-time query, but if multiple actions execute different queries, the SQL engine will be very busy building query plans and executing them. If you need more performance, the number of indexes you'll need to create is sometimes significant, and then updates and deletes become very slow.

The problem with stored procedures is that either your one big stored procedure is going to be a maintenance nightmare, or, if you're creating stored procedures dynamically, it's going to be a nightmare to write the code that maintains those stored procedures.

And then again, there's the index issue, which must be addressed.

I've figured out another way of creating dynamic queries. It's not very fast, since it uses cross joins, but it's very quick to set up, so you can get started with low-volume tables (< 100k records).

So let's start with our data structure:

1. The Entities table


CREATE TABLE Entities (
  EntityId int  IDENTITY(1,1) NOT NULL,
  Name varchar(255) default NULL,
  Phone varchar(100) default NULL,
  Email varchar(255) default NULL,
  Address varchar(255) default NULL,
  City varchar(50) default NULL,
  Zip varchar(10) default NULL,
  State varchar(50) default NULL,
  Country varchar(50) default NULL,
  BirthDate varchar(50) default NULL
)


2. The Query table


CREATE TABLE Queries
(
    QueryId int NOT NULL IDENTITY (1, 1),
    Name varchar(100) NOT NULL
)


3. The Query Items table


create table QueryItems
(
    QueryItemId int not null identity (1,1),
    QueryId int,
    GroupId tinyint,
    AttributeCode tinyint,
    Value nvarchar(255)
)


The Entities table is pretty straightforward, and so is the Queries table.

The QueryItems table has a GroupId for queries with groups of items (groups are ANDed together, items inside a group are ORed with each other). The AttributeCode is what the item is asking for: 0 for Country, 1 for Birthday Month and 2 for Email Contains. Any combination of these attributes, groups and values will produce a correct answer.
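
For illustration, a query like "country is US and (birthday month is June or email contains gmail)" could be stored like this (the ids and values here are made up):


insert into Queries (Name) values ('US June gmailers')

-- group 0: country must be US
insert into QueryItems (QueryId, GroupId, AttributeCode, Value) values (1, 0, 0, 'US')

-- group 1: birthday month is June OR email contains gmail
insert into QueryItems (QueryId, GroupId, AttributeCode, Value) values (1, 1, 1, '6')
insert into QueryItems (QueryId, GroupId, AttributeCode, Value) values (1, 1, 2, 'gmail')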

And for the query engine part:


WITH cte AS 
(
   SELECT EntitiesRulesGroups.EntityId, EntitiesRulesGroups.QueryId, MatchingUserGroups.EntityId AS MatchingEntityId
   FROM (
    SELECT EntityId, QueryId, GroupId
    FROM Entities
    CROSS JOIN (
     SELECT DISTINCT QueryId, GroupId
     FROM QueryItems
     ) AS QueryItemsGroups
    ) EntitiesRulesGroups
   LEFT OUTER JOIN 
   (
    SELECT EntityId, QueryId, GroupId, Sum(MatchBit) AS MatchBit
    FROM 
    (
         SELECT EntityId, QueryItems.QueryId, QueryItems.GroupId, 
            (
                /* Country */        (CASE WHEN ((AttributeCode = 0) AND (Entities.Country = QueryItems.Value))                    THEN 1 ELSE 0 END) +
                /* Birthday Month */ (CASE WHEN ((AttributeCode = 1) AND (datepart(MONTH, Entities.BirthDate) = QueryItems.Value)) THEN 1 ELSE 0 END) +
                /* Email Contains */ (CASE WHEN ((AttributeCode = 2) AND (Entities.Email like '%' + QueryItems.Value + '%'))       THEN 1 ELSE 0 END)
            ) AS MatchBit
            
         FROM Entities
     CROSS JOIN QueryItems
    ) as Matching
   WHERE (MatchBit > 0)
   GROUP BY EntityId, QueryId, GroupId
   ) AS MatchingUserGroups
    ON MatchingUserGroups.EntityId = EntitiesRulesGroups.EntityId AND MatchingUserGroups.QueryId = EntitiesRulesGroups.QueryId AND MatchingUserGroups.GroupId = EntitiesRulesGroups.GroupId
)


select EntityId, QueryId
from Entities
cross join Queries
where not exists
(
    SELECT *
    FROM cte
    WHERE cte.EntityId = Entities.EntityId AND cte.QueryId = Queries.QueryId AND MatchingEntityId IS NULL
)


So what's happening here?

First, we're writing a CTE that cross-checks all entities against all query items, creating one big table that tells us whether an entity is valid for a specific query item.

In the actual query, we're generating a cross join of all entities and all queries, and then asking the CTE whether there is any failed match in the list.

So why are we checking that there are no NULLs instead of simply checking that a matching record exists?

Well, the existence of a record only tells us that some query item matched the entity; the only way to detect that one of the ANDed groups has no match at all is to ask whether any group failed to match the entity.

Of course it's a slow query: it's a cross join over multiple tables and there is no way for the query optimizer to do anything, and the amount of comparing done is almost the same as comparing each item in code rather than in SQL. The advantage is that everything is done inside SQL without cursors, so data access is actually faster.

There are ways to improve this engine; I'm showing only the basics here for better understanding:

1. Cache the results and only update the cache on changes, either for a specific entity against all queries or for a specific query against all entities; querying the cache is then very fast.
If the cache ever gets out of sync, it's relatively quick to rebuild the results. You should read up on SQL MERGE (see the sketch after this list).
2. Build pre-calculated temp tables for the queries and the entities, so a cast or convert is done only once per entity/query item pair.
3. Use with (nolock) where applicable.
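
To sketch the caching idea from point 1 (the QueryResultsCache and #results names are my own, not part of the original engine):


-- hypothetical cache table holding the last known matches
create table QueryResultsCache
(
    EntityId int not null,
    QueryId int not null
)

-- assuming #results was populated with the (EntityId, QueryId) output of
-- the engine query above, sync the cache without rewriting the whole table
merge into QueryResultsCache as target
using #results as source
    on source.EntityId = target.EntityId and source.QueryId = target.QueryId
when not matched by target then
    insert (EntityId, QueryId) values (source.EntityId, source.QueryId)
when not matched by source then
    delete;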

I've tested this method with 500k entities (and their own sub-entities, up to 10 each) and 1k queries; performance was about 500ms for one query against all entities or one entity against all queries, and a total cache rebuild took about 5 minutes. Your results may vary based on too many things to count.


Wednesday, June 6, 2012

Unused/Duplicate Indexes in SQL Server

One of the most common tasks when maintaining a database still under development is cleanup. 

Cleaning up unused and duplicate indexes is important. Say you created a great index in the past for a particularly slow query; two months later, that section of the code was redesigned, making your old index obsolete.

Unused and duplicate indexes waste space. Worse, they slow down updates and can even slow down queries if the query engine chooses a very bad index as its data source; we don't want that in our pristine database.

So now what? In a large database, going after every table, every query and every index is sometimes impractical, and narrowing down the number of indexes you need to verify could be the only way to go.

There are two nearly identical ways of doing it: one with Microsoft SQL Server Management Studio and the other via query.

In Management Studio, go to Object Explorer, right-click the desired database, then Reports, Standard Reports, Index Usage Statistics.

You'll get something similar to this:


So what are all these numbers? What should you look at?

The most important information here is the numbers of user seeks, scans and updates, plus the last user seek, scan and lookup times.

The numbers of user seeks, scans and updates tell you which index is most or least active, but what if the most active index is also one that was deprecated a month ago and nobody noticed? That's why the dates are there.

Note that Index Usage Statistics only shows indexes that were actually touched by the system, either by updates or by queries.

To find really dead indexes, you should also take a look at Index Operational Statistics and look for indexes without any Range Scans or Singleton Lookups.
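
If you prefer a query over the report, the same signals come from the sys.dm_db_index_operational_stats DMF; a minimal sketch listing indexes with no recorded range scans or singleton lookups:


SELECT o.name AS TableName,
       i.name AS IndexName,
       os.range_scan_count,
       os.singleton_lookup_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os
    JOIN sys.indexes AS i ON i.object_id = os.object_id AND i.index_id = os.index_id
    JOIN sys.objects AS o ON o.object_id = os.object_id
WHERE o.type = 'U'
    AND os.range_scan_count = 0
    AND os.singleton_lookup_count = 0
ORDER BY TableName, IndexName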

In the table below, I've added an index but didn't bother to query by it, so the last time that index was touched was on creation; the table was only touched by queries not requiring it, so it doesn't show up in the Index Usage Statistics.

Another very important thing to look for is indexes with excessive scans and no seeks. Indexes are there to reduce the load on the database, and it's cool that the engine can use them (and their included columns) to shrink the subset of data it works with, but a lot of the time this is an indication that there's a rogue query on the loose; catch it before it becomes an issue.


In my opinion, the piece of information most sorely missing from this report is the size of the index. As bad a practice as it is to keep an unused index around, if it's 1k in size and doesn't slow down any updates, it's a waste of your time and resources to even look at it (unless your task is cleanup rather than performance).

Use common sense. If the index is on a table and you don't know what it does, check with the teammates who last handled that table before you make ANY decisions. It could be a report table for the bookkeeping department, executed once a year, and when they suddenly need it they'll be working overnight because the reports didn't come out on time... measure twice, cut once.


Martin Thwaites has a nice article on CodeProject; he wrote a query (I just added the last write time) that shows you all this info plus the size of the index. Thanks, Martin!


SELECT 
    sch.name + '.' + t.name AS [Table Name], 
    i.name AS [Index Name], 
    i.type_desc,  
    ISNULL(user_updates,0) AS [Total Writes], 
    ISNULL(user_seeks + user_scans + user_lookups,0) AS [Total Reads],
    s.last_user_seek, 
    s.last_user_scan ,
    s.last_user_lookup,
    s.last_user_update,
    ISNULL(user_updates,0) - ISNULL((user_seeks + user_scans + user_lookups),0) AS [Difference],  
    p.reserved_page_count * 8.0 / 1024 as SpaceInMB
FROM sys.indexes AS i WITH (NOLOCK)
    LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
        ON s.object_id = i.object_id AND i.index_id = s.index_id AND s.database_id = db_id() AND objectproperty(s.object_id, 'IsUserTable') = 1
    INNER JOIN sys.tables AS t WITH (NOLOCK)
        ON i.object_id = t.object_id
    INNER JOIN sys.schemas AS sch WITH (NOLOCK)
        ON t.schema_id = sch.schema_id
    LEFT OUTER JOIN sys.dm_db_partition_stats AS p WITH (NOLOCK)
        ON i.index_id = p.index_id AND i.object_id = p.object_id
WHERE (1=1)
    --AND ISNULL(user_updates,0) >= ISNULL((user_seeks + user_scans + user_lookups),0) --shows all indexes including those that have not been used  
    --AND ISNULL(user_updates,0) - ISNULL((user_seeks + user_scans + user_lookups),0)>0 --only shows those indexes which have been used  
    --AND i.index_id > 1            -- Only non-first indexes (I.E. non-primary key)
    --AND i.is_primary_key<>1        -- Only those that are not defined as a Primary Key)
    --AND i.is_unique_constraint<>1 -- Only those that are not classed as "UniqueConstraints".  
ORDER BY [Table Name], [index name]





Friday, May 25, 2012

Hierarchy Cache



In the past I tried to avoid using hierarchies in SQL, but I couldn't find another simple data structure for a system requirement: all elements must be children of other elements with no depth limit.


I was able to change the depth limit to 100.


So I started out with:


CREATE TABLE [dbo].[Hierarchy](
    [ElementId] [int] IDENTITY(1,1) NOT NULL,
    [ParentId] [int] NULL,
 CONSTRAINT [PK_Hierarchy] PRIMARY KEY CLUSTERED 
(
    [ElementId] ASC
)
)


At first everything worked fine. I could traverse the tree with a CTE without any problem, and it was even fast for the current load. Then came another system requirement: the tree size will be about 1 million records.


with cte as
(
    -- anchor: start from the root elements
    select ElementId, ParentId, 0 as Level
    from Hierarchy
    where ParentId is null

    union all

    -- recursive part: descend one level per iteration
    select Hierarchy.ElementId, Hierarchy.ParentId, Level + 1
    from Hierarchy
        join cte on cte.ElementId = Hierarchy.ParentId
)

select * from cte


Trying to traverse a tree of 1M records with a CTE, even at only 15 levels, took more time than a production SQL server should spend, especially when the system requirement is live or near-live results.


We looked for solutions in every corner of the internet: graph databases (we'd still need to use these IDs in SQL, so passing the data back would be a design nightmare; moving portions of the application, or all of it, over is another option I guess), all sorts of algorithms (which would be error prone and might be a heavy toll on the development team), CLR functions, and of course scrapping the whole hierarchy idea (no can do, system requirements).


Eventually we came to the conclusion that keeping a cache would not be too costly in memory, CPU and disk, provided the right indexes were created. And the fun part is that it's very easy and fast to join the cache table to any other table we need.


But we had to compromise somewhere; we compromised on node parent changes, some of which might cause the entire tree to be rebuilt.


I built a method to quickly update the tree given an ElementId; if no ElementId is provided, it rebuilds the whole tree.


I can smell just one little problem, which will need to be solved when updating records: detecting circular references.
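
A minimal sketch of such a check (my own assumption of how it could look, not part of the original code): before re-parenting an element, walk up from the proposed new parent and verify we never reach the element itself:


declare @MovedElementId int = 5   -- element being re-parented (made-up ids)
declare @NewParentId int = 42     -- proposed new parent

;with ancestors as
(
    select ElementId, ParentId
    from Hierarchy
    where ElementId = @NewParentId

    union all

    select h.ElementId, h.ParentId
    from Hierarchy h
        join ancestors a on a.ParentId = h.ElementId
)
select case when exists (select * from ancestors where ElementId = @MovedElementId)
            then 1 else 0 end as WouldCreateCycle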


CREATE TABLE [dbo].[HierarchyCache](
 [OwnerId] [int] NOT NULL,
 [ElementId] [int] NOT NULL
)

And here is the SQL code that updates the HierarchyCache; note that I'm using merge instead of dropping the whole table or writing individual inserts and deletes.

declare @ElementId int = null


--if no ElementId was supplied, go over the whole Hierarchy table and rebuild the cache
if (@ElementId is null)
begin


    with parentHierarchy (ElementId, ParentId , TopElementId)
    as
    (
        select  ElementId, ParentId, ElementId as TopElementId
        from Hierarchy with (nolock)

        union all

        select  [Hierarchy].ElementId,[Hierarchy].ParentId , pu.TopElementId
        from Hierarchy with (nolock)
        inner join parentHierarchy as pu
            on pu.ParentId = [Hierarchy].ElementId
    )

    --instead of updating the entire table, we're doing a merge, more efficient and easier on the disk
    merge into HierarchyCache as target
    using parentHierarchy
        on (parentHierarchy.TopElementId = target.OwnerId and parentHierarchy.ElementId = target.ElementId)
    when not matched by target then
        insert (ElementId, OwnerId) values (parentHierarchy.ElementId,parentHierarchy.TopElementId)
    when not matched by source then
        delete;

end
else
begin
    --an ElementId was supplied, update only its tree

    with parentHierarchy (ElementId, ParentId  , TopElementId)
    as
    (
        select  ElementId, ParentId , ElementId as TopElementId
        from Hierarchy with (nolock)
        where ElementId = @ElementId

        union all

        select  [Hierarchy].ElementId,(case when (Hierarchy.ParentId = Hierarchy.ElementId) then null else Hierarchy.ParentId end) as ParentId , pu.TopElementId
        from Hierarchy  with (nolock)
        inner join parentHierarchy as pu
            on pu.ParentId = [Hierarchy].ElementId

    )

    --instead of updating the entire table, we're doing a merge, more efficient and easier on the disk
    merge into HierarchyCache as target
    using parentHierarchy
        on (parentHierarchy.TopElementId = target.OwnerId and parentHierarchy.ElementId = target.ElementId)
    when not matched by target then
        insert (ElementId, OwnerId) values (parentHierarchy.ElementId,parentHierarchy.TopElementId);


end
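
With the cache in place, hierarchy lookups become plain filters. As I read the rebuild CTE, every row in HierarchyCache is an (ancestor, descendant) pair, with ElementId holding the ancestor and OwnerId the descendant, so for example (id made up):


-- all ancestors of element 42 (including itself)
select ElementId
from HierarchyCache
where OwnerId = 42

-- the entire subtree under element 42 (including itself)
select OwnerId
from HierarchyCache
where ElementId = 42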




Wednesday, March 14, 2012

SQL Table Row Size

If you've ever designed a table, you must have wondered how much space the table will take on disk and in memory, how to estimate the amount of memory for the server, etc. Well, I can't help you there; there are too many things to take into consideration for a short blog post.


However, I can help you with one thing: estimating the size of a row. Just multiply it by the number of rows you estimate will be in the table and you'll have a pretty close number. You do have to know which indexes you'll have, though, since they might even double the amount of space needed; this article explains in detail how to get a more accurate estimate.


Thanks to Ruchir T, who partially implemented that article; I just needed to make a few modifications and voila, you can now start estimating.



---- =============================================
---- Author: Ruchir T (http://www.west-wind.com/weblog/posts/2004/Jan/19/Sql-Server-Row-Size-Limit)
---- Create date: 01/02/2008
---- Update date: 2012-03-14 show all row info
---- Description: returns the number of bytes left to use for creating new columns
---- =============================================
CREATE FUNCTION usfTable_estimates
(    
    @tablename char(50)
)
RETURNS 
    @rettable table
    (
        TableName nvarchar(255),
       num_columns int,
       num_fixed_columns int,
       fixed_data_size int,
       num_var_columns int,
       max_var_size int,
       var_data_size int,
       null_bitmap_size int,
       row_size int,
       bytes_available int
    ) 
AS
begin 


DECLARE @num_columns int
DECLARE @result int
DECLARE @num_fixed_columns int
DECLARE @fixed_data_size int
DECLARE @var_data_size int
DECLARE @num_var_columns int
DECLARE @max_var_size int
DECLARE @null_bitmap_size int
DECLARE @row_size int

-- Find the total number of columns
select @num_columns = count(*)
from syscolumns,systypes
where syscolumns.id=object_id(@tablename)
and syscolumns.xtype=systypes.xtype


-- Find the size occupied by fixed length columns (Note: not possible to exist outside the 8060 bytes limit)
select @num_fixed_columns = count(*)
from syscolumns,systypes
where syscolumns.id=object_id(@tablename)
and syscolumns.xtype=systypes.xtype and systypes.variable=0

select @fixed_data_size = sum(syscolumns.length)
from syscolumns,systypes
where syscolumns.id=object_id(@tablename)
and syscolumns.xtype=systypes.xtype and systypes.variable=0

-- Find the size occupied by variable length columns within the 8060 page size limit

-- number of variable length columns
select @num_var_columns=count(*)
from syscolumns, systypes
where syscolumns.id=object_id(@tablename)
and syscolumns.xtype=systypes.xtype and systypes.variable=1
-- max size of all variable length columns
select @max_var_size =max(syscolumns.length)
from syscolumns,systypes
where syscolumns.id=object_id(@tablename)
and syscolumns.xtype=systypes.xtype and systypes.variable=1
-- calculate variable length storage
if @num_var_columns > 0
    set @var_data_size = 2 + (@num_var_columns * 2) + @max_var_size
    --set @var_data_size = @num_var_columns*24
else
    set @var_data_size = 0

-- If there are fixed-length columns in the table, a portion of the row, known as the null bitmap, is reserved to manage column nullability.
select @null_bitmap_size = 2 + ((@num_columns+7)/8)

-- Calculate total rowsize
select @row_size = @fixed_data_size + @var_data_size + @null_bitmap_size + 4

-- Return the available bytes in the row available for expansion
select @result = 8060 - @row_size

--RETURN @result

insert into @rettable
select @tablename as TableName,
       @num_columns as num_columns,
       @num_fixed_columns as num_fixed_columns,
       @fixed_data_size as fixed_data_size,
       @num_var_columns as num_var_columns,
       @max_var_size as max_var_size,
       @var_data_size as var_data_size,
       @null_bitmap_size as null_bitmap_size,
       @row_size as row_size,
       @result as bytes_available
       
      return
    
end
GO
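
Usage is a plain select from the function; for example, against the Entities table from the query engine post above:


select * from dbo.usfTable_estimates('Entities')


Following the function's formula for that table (1 fixed-length int column and 9 variable-length columns, the widest being 255 bytes): fixed_data_size = 4, var_data_size = 2 + (9 * 2) + 255 = 275, null_bitmap_size = 2 + ((10 + 7) / 8) = 4, so row_size = 4 + 275 + 4 + 4 = 287 and bytes_available = 8060 - 287 = 7773.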

Tuesday, March 6, 2012

SQL Table Space

I was doing capacity planning the other day and needed to know which tables take how much space, how many rows they have, etc.

There's a system stored procedure for that, sp_spaceused, but it only works on the whole database or on a specific table. I tried pushing its results into a table and sorting by it, but it converts the numbers to strings, so the ordering didn't exactly work.

So I looked at the sp's innards and wrote the following:



select stats.name,
       row_count,
       stats.reserved_page_count * 8 as reservedKB,
       stats.pages * 8 as dataKB,
       (CASE WHEN stats.used_page_count > stats.pages THEN (stats.used_page_count - stats.pages) ELSE 0 END) * 8 as index_sizeKB,
       (CASE WHEN stats.reserved_page_count > stats.used_page_count THEN (stats.reserved_page_count - stats.used_page_count) ELSE 0 END) * 8 as unusedKB
from
(
    select name,
           sum(reserved_page_count) as reserved_page_count,
           sum(used_page_count) as used_page_count,
           sum(
               CASE
                   WHEN (index_id < 2) THEN (in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count)
                   ELSE lob_used_page_count + row_overflow_used_page_count
               END
              ) as pages
                
    FROM sys.dm_db_partition_stats
    join sys.objects on sys.objects.object_id = sys.dm_db_partition_stats.object_id
        WHERE sys.objects.type = 'U'
    group by name,sys.dm_db_partition_stats.object_id
) as stats
join
(
    select name, max(row_count) as row_count
    FROM sys.dm_db_partition_stats
        join sys.objects on sys.objects.object_id = sys.dm_db_partition_stats.object_id
            WHERE sys.objects.type = 'U'
    group by name
) as rowcountstats
    on rowcountstats.name = stats.name

order by row_count desc

Monday, March 5, 2012

Rebuild database

While still in design and development, a database goes through many changes: tables and columns are added and removed, records are added and deleted, performance tests are run, data structures go through sanity testing, indexes get fragmented, etc.


This leaves a lot of junk in the database files. Sometimes that space is reclaimed by the SQL engine and sometimes you need to ask for it specifically; that is the purpose of this stored procedure.


I've used portions of it over the years, but only now have I collected all of them into a single stored procedure; the dynamic nature of the project I'm currently working on makes me execute this once in a while. I thought I'd share it with you. Please be aware that this should not be executed in a production environment: you must read and understand what it's doing before you execute it, as it might cause serious data loss in specific cases.


To help you avoid making that mistake, I've commented out the entire body of the stored procedure and put a return at the beginning of it.



-- =============================================
-- Author:  Dror Gluska
-- Create date: 2012-03-05
-- Description: Performs cleanup/rebuild on all indexes, freeing space and accelerating queries
-- =============================================
create PROCEDURE [dbo].[usp_DBCCCleanup]
AS
BEGIN
 -- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements
 SET NOCOUNT ON;

    return;

----rebuild tables

--print 'Rebuilding tables...'

--declare @rebuildtables table(name nvarchar(255), cmd nvarchar(max));

--insert into @rebuildtables
--SELECT  o.[name],'ALTER TABLE ' + '[' + s.[name] + ']'+'.' + '[' + o.[name] + ']' + ' REBUILD ;'
--FROM sys.objects AS o WITH (NOLOCK)
--INNER JOIN sys.indexes AS i WITH (NOLOCK)
--ON o.[object_id] = i.[object_id]
--INNER JOIN sys.schemas AS s WITH (NOLOCK)
--ON o.[schema_id] = s.[schema_id]
--INNER JOIN sys.dm_db_partition_stats AS ps WITH (NOLOCK)
--ON i.[object_id] = ps.[object_id] AND ps.[index_id] = i.[index_id]
--WHERE o.[type] = 'U' ORDER BY ps.[reserved_page_count]

--declare c_rebuild cursor for select name, cmd from @rebuildtables

--open c_rebuild

--declare @tname nvarchar(255), @tcmd nvarchar(max);

--fetch next from c_rebuild into @tname, @tcmd
--while (@@fetch_status <> -1)
--begin
-- print 'Rebuilding ' + @tname
-- exec sp_executesql @tcmd
 
-- fetch next from c_rebuild into @tname, @tcmd
--end
--close c_rebuild
--deallocate c_rebuild


--print 'Done rebuilding tables'

--print 'Rebuilding indexes...'
----taken from http://technet.microsoft.com/en-us/library/bb838727(v=office.12).aspx

--DECLARE @objectid int;
--DECLARE @indexid int;
--DECLARE @partitioncount bigint;
--DECLARE @schemaname nvarchar(130);
--DECLARE @objectname nvarchar(130);
--DECLARE @indexname nvarchar(130);
--DECLARE @partitionnum bigint;
--DECLARE @partitions bigint;
--DECLARE @frag float;
--DECLARE @command nvarchar(4000);
--DECLARE @dbid smallint;

---- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
---- and convert object and index IDs to names.

--SET @dbid = DB_ID();

--SELECT
--    [object_id] AS objectid,
--    index_id AS indexid,
--    partition_number AS partitionnum,
--    avg_fragmentation_in_percent AS frag, page_count
--INTO #work_to_do
--FROM sys.dm_db_index_physical_stats (@dbid, NULL, NULL , NULL, N'LIMITED')
--WHERE
----avg_fragmentation_in_percent > 10.0  -- Allow limited fragmentation
----AND
--index_id > 0 -- Ignore heaps
----AND page_count > 25; -- Ignore small tables
---- Declare the cursor for the list of partitions to be processed.
--DECLARE partitions CURSOR FOR SELECT objectid,indexid, partitionnum,frag FROM #work_to_do;
---- Open the cursor.
--OPEN partitions;
---- Loop through the partitions.
--WHILE (1=1)
--BEGIN
--FETCH NEXT
--FROM partitions
--INTO @objectid, @indexid, @partitionnum, @frag;
--IF @@FETCH_STATUS < 0 BREAK;
--SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
--FROM sys.objects AS o
--JOIN sys.schemas as s ON s.schema_id = o.schema_id
--WHERE o.object_id = @objectid;
--SELECT @indexname = QUOTENAME(name)
--FROM sys.indexes
--WHERE object_id = @objectid AND index_id = @indexid;
--SELECT @partitioncount = count (*)
--FROM sys.partitions
--WHERE object_id = @objectid AND index_id = @indexid;
---- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
--IF @frag < 30.0
--SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
--IF @frag >= 30.0
--SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
--IF @partitioncount > 1
--SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
--print 'Rebuilding/Reorganizing index ' + @indexname
--EXEC (@command);
--PRINT N'Executed: ' + @command;
--END
---- Close and deallocate the cursor.
--CLOSE partitions;
--DEALLOCATE partitions;
---- Drop the temporary table.
--DROP TABLE #work_to_do;
--print 'Updating usage'
--DBCC UPDATEUSAGE (0)
--print 'Shrinking database'
--declare @shcmd nvarchar(255) = 'DBCC SHRINKDATABASE (' + DB_NAME() + ',10)'
--exec (@shcmd)
 
END

