When you are performing many DML operations that cause contention on tempdb, you may see a wait_resource of 2:1:103.
Check these performance counters:
- Temp Tables Creation Rate
- Temp Tables For Destruction
In sys.dm_os_waiting_tasks the resource looks like (2:1:%), where page 1 is PFS and page 3 is SGAM (allocation contention), and page 103 is DDL contention on the sysmultiobjrefs system table caused by heavy CREATE/DROP of temp tables.
for 2:1:1 and 2:1:3:
>> Check that tempdb is on a separate LUN (drive).
>> Enable trace flag -T1118.
>> Check the sort and DML queries on temp objects and try to tune them.
>> Try to avoid creating temp objects where possible.
>> Try to include indexes on the temp tables.
>> Apply the relevant hotfix.
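The waits above can be spotted with a quick query against sys.dm_os_waiting_tasks (a minimal sketch; the filter matches the tempdb pages discussed above):

```sql
-- Sessions currently waiting on tempdb allocation/system pages
-- (2:1:1 = PFS, 2:1:3 = SGAM, 2:1:103 = sysmultiobjrefs)
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE N'PAGELATCH%'
  AND resource_description LIKE N'2:1:%';
```

If this returns rows continuously under load, you have tempdb contention and the fixes above apply.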
Must read :
You may get an error while trying to fail over to another cluster node.
To resolve this issue, create an alias on the node with the cluster name, and it should work.
will blog more on this later.
The Log Reader Agent may fail with the following error:
Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
The reason could be that, due to some activity, the owner of the publisher database has gone missing (shows as UNKNOWN).
Change the owner of the publisher database using:
EXEC sp_changedbowner 'sa'
and it should work.
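To confirm the owner before and after the fix, something like this works (a sketch; PublisherDB is a placeholder for your publisher database name, and ALTER AUTHORIZATION is the newer equivalent of sp_changedbowner):

```sql
-- Check the current owner; an orphaned owner shows SUSER_SNAME(owner_sid) as NULL
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases
WHERE name = N'PublisherDB';

-- Newer syntax equivalent to EXEC sp_changedbowner 'sa'
ALTER AUTHORIZATION ON DATABASE::PublisherDB TO sa;
```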
Yes, it's true.
A Microsoft vice president has announced that the new version, SQL Server 2014, will release in 2014…
Do you think it is possible? Most of us are not even familiar with 2012 yet…
Anyway… as MS has already made clear that they will release a new version every 2 years, this announcement is part of that.
You must see this diagram, which shows the features and the SQL Server version in which each was implemented.
[Must Read] Whitepaper by expert Kalen Delaney on SQL Server 2014 In-Memory OLTP ("Hekaton"): http://sqlblog.com/blogs/kalen_delaney/archive/2013/06/05/hekaton-whitepaper.aspx
Below are some good links related to SQL Server 2014.
Hey everyone… for quite some time I have not written blogs, due to some personal things and transitioning from my old company. Yeah, if you read my good-bye blog last year, I indicated my interest in changing companies; it took a year to do so :) So from this year I have joined a new company… now having more than 12 years of work experience, and more than a decade with SQL Server… still learning… learning never ends :) Loving it.
Okay, looking back at what I achieved in 2012:
>> As committed, I continued learning Denali and writing a blog a day: I did a Denali blog-a-day series in May 2012. If you want a consolidated PDF of that, please comment or send me an email.
Continuing to learn more Denali stuff; it's getting interesting.
>> Learning other stuff as well.
I hope I will continue to learn more about SQL Server.
Thanks again to all the experts who share their valuable information on the internet.
Thanks for reading.
Continuing on what's new in Denali and memory enhancements, I came across another great article by SQLCAT and Glenn Berry about a new feature of SQL Server 2012, in association with Windows Server 2012, called Memory Error Recovery. I wanted to blog about it earlier but forgot due to some other personal stuff.
If you install SQL Server 2012 on Windows Server 2012 Enterprise edition, it is a great package with all the great Denali features, including Memory Error Recovery. Generally memory pages do not get corrupted that easily, but if you run a very critical enterprise system you don't want to take chances: due to a hardware failure or some other reason, data in memory can become corrupt, and until now there was no solution for such cases.
How it works:
If data in the buffer pool (BPool) is corrupt or lost, but that data has already been committed to disk, this feature lets SQL Server re-read the page from disk and repair the in-memory data cache. Requirements:
- Windows Server 2012 Enterprise
- SQL Server 2012 Enterprise
- The data must have been committed to disk.
- Hardware/software must be compatible.
DBCC Cleaning CACHE – Memory
I wanted to write this blog for quite some time; here you go. As you know, in SQL Server everything is served from memory, and information in memory is retrieved quickly compared to disk. If something is not in memory, the engine has to fetch it from disk into memory, and since disk is much slower than memory, that is time consuming. OK, I am not going deeper into this.
So, if we want to free up the cache for some reason, the following are the DBCC commands…
(Never run these commands on production; run at your own performance risk.)
These DBCC commands are listed just to briefly explain what they do.
1. DBCC DROPCLEANBUFFERS – removes all clean buffers from the buffer pool. Run CHECKPOINT first, then this command, to clean the data cache (almost as good as a SQL restart).
2. DBCC FREESYSTEMCACHE – clears entries from the system caches (releases all unused cache entries from all caches).
3. DBCC FREEPROCCACHE – clears the procedure cache, so that the next time an SP runs it recompiles.
4. DBCC FREESESSIONCACHE – flushes the distributed query connection cache (distributed queries use OLE DB and can connect to other RDBMSs as well).
5. DBCC FLUSHPROCINDB(<db_id>); – clears the procedure cache for a particular database, similar to DBCC FREEPROCCACHE.
If the AUTO_CLOSE option is set, the cache for that database is freed automatically.
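On a non-production box, the commands above are typically run in this order (a sketch; heed the warning above and run at your own risk):

```sql
-- Flush dirty pages to disk first, so DROPCLEANBUFFERS empties a clean cache
CHECKPOINT;
DBCC DROPCLEANBUFFERS;        -- empty the buffer pool (data cache)
DBCC FREEPROCCACHE;           -- drop all cached plans; SPs recompile next run
DBCC FREESYSTEMCACHE ('ALL'); -- release unused entries from all caches
DBCC FREESESSIONCACHE;        -- flush the distributed query connection cache
```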
List of DBCC Commands:
Error: ACCESS DENIED on installation
Generally you would get this error:
>> during installation
>> when trying to start the SQL services
>> after installation, when trying to access the instance
You would get an "ACCESS DENIED" error.
To resolve this issue, you have to grant access in secpol.msc and in the registry to the service account your SQL Server is running with:
Go to Security Settings-> Local Policies ->User Rights Assignment.
Your SQL service account should be granted the required rights there, and also permissions on the following registry key:
- HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\WinHttpAutoProxySvc – the SQL Server service account should have FULL CONTROL on this key.
Restart the machine.
Most of the time it should work.
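If the instance comes up at least partially, you can double-check which account the services run under from T-SQL (a sketch; sys.dm_server_services is available from SQL Server 2008 R2 SP1 onward):

```sql
-- Which Windows account each SQL Server service runs under
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;
```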
Will try to write more on security later.
Normalization & Denormalization
Someone asked me about this basic thing, which we generally do not think much about, so I realized I should write a short blog on it.
Normalization is the process of organizing the data in a database, using database design rules (normal forms), so that redundancy is eliminated. The more normalized the data, the less redundancy it carries.
There are different normal forms, depending on how much redundancy you want to filter out.
First Normal Form:
This is the basic one: every column holds atomic (single) values and every row is unique, typically enforced by a PRIMARY KEY.
Second Normal Form:
Builds on 1NF: every non-key column must depend on the whole key, not just part of it (this matters for composite keys). The resulting split tables are linked with FOREIGN KEYs.
Third Normal Form:
Includes 1st and 2nd Normal Form, and additionally makes sure non-key columns depend only on the key, not on other non-key columns (no transitive dependencies).
- The more normalized the data, the more tables (objects) and relationships between them are required, which is good for data distribution.
- Including indexes on them speeds up data retrieval.
- This is good for OLTP environments, which have more data modifications.
- It requires more joins to retrieve data.
Denormalization: the opposite of normalization; it is good for OLAP systems, where there are fewer writes and more reads.
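A tiny sketch of the difference (the table and column names are made up for illustration):

```sql
-- Normalized (3NF): customer data lives in one place, orders reference it
CREATE TABLE Customer (
    CustomerId   INT PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL,
    City         VARCHAR(50)  NOT NULL
);

CREATE TABLE SalesOrder (
    OrderId    INT PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customer(CustomerId), -- FOREIGN KEY
    OrderDate  DATE NOT NULL
);

-- Denormalized: customer columns repeated on every order row;
-- fewer joins for reads, but redundant data to keep in sync
CREATE TABLE SalesOrderFlat (
    OrderId      INT PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL,
    City         VARCHAR(50)  NOT NULL,
    OrderDate    DATE NOT NULL
);
```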