Tuesday, March 27, 2012
Advice on hard drive setup for SQL Server 2005 64-bit
I built a server and I need some advice on reconfiguring the hard drives for
better SQL Server performance. Here are my current specs:
2 Xeon 5160 dual-core hyperthreaded processors
8 GB of FB-DIMM RAM
Windows Server 2003 64-bit
SQL Server 2005 64-bit
2 500 GB SATA II hard drives set up as RAID 1 at the hardware level. They
are partitioned as 15 GB for the OS and the rest for my development
environment.
In my development testing I am working with databases starting at 30 GB that
can run upwards of 100 GB. After the initial setup and the first round of
testing I quickly learned that I needed to move my tempdb onto the larger
partition. I also learned that I should move it onto a fast disk that isn't
part of the RAID, because it doesn't need to be backed up. Additionally, I read
that there is no need to put the system swap file on the RAID, and that for
better performance it should be on a separate, faster disk.
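For reference, the tempdb move described above can be done with ALTER DATABASE. This is a minimal sketch, assuming the default logical file names (tempdev, templog); the drive letter and path are placeholders for wherever the new disk is mounted. tempdb is re-created at the new location the next time the SQL Server service restarts.

```sql
-- Sketch: relocate tempdb to a new drive (paths are placeholders).
-- The change takes effect after the SQL Server service restarts.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
```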
This is how I am thinking of reconfiguring the system:
2 36 GB 10,000 RPM SATA I drives mirrored under RAID 1, used solely for the
OS
1 174 GB 10,000 RPM SATA I drive used for tempdb and the system swap space,
and the remaining 2 500 GB drives again set up as RAID 1 for my development
databases and environments.
Does that sound better than my original setup? If not, I am open to
suggestions.
Also, if I did move my OS to the new 36 GB drive, would I need to do a complete
reinstall, or could I do a backup and restore? I think it would be possible
using Partition Magic or a similar tool.
Thanks a lot
Sounds improved. You may also want to split tempdb into two files, one on
the 36 GB and one on the 174 GB drive. I think the optimum for 2005 is one
tempdb file per CPU (not counting hyperthreads).
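As a hedged sketch of that suggestion: tempdb cannot have extra filegroups, so spreading it across CPUs is normally done by adding data files. Paths and sizes below are placeholders; four files would match the four physical cores in the specs above.

```sql
-- Sketch: additional tempdb data files, one per physical core (paths/sizes
-- are placeholders). Equal sizes let SQL Server's proportional-fill
-- algorithm spread allocations evenly across the files.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\SQLData\tempdb2.ndf', SIZE = 1GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'E:\SQLData\tempdb3.ndf', SIZE = 1GB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'E:\SQLData\tempdb4.ndf', SIZE = 1GB);
```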
TheSQLGuru
President
Indicium Resources, Inc.
"Coaster" <Coaster@.Coaster.net> wrote in message
news:OKqVNMxpHHA.960@.TK2MSFTNGP03.phx.gbl...
> Hi
> I built a server and I need some advice on reconfiguring the hard drives
> for better SQL Server performance. Here are my current specs:
> 2 Xeon 5160 dual-core hyperthreaded processors
> 8 GB of FB-DIMM RAM
> Windows Server 2003 64-bit
> SQL Server 2005 64-bit
> 2 500 GB SATA II hard drives set up as RAID 1 at the hardware level.
> They are partitioned as 15 GB for the OS and the rest for my development
> environment.
> In my development testing I am working with databases starting at 30 GB
> that can run upwards of 100 GB. After the initial setup and the first round
> of testing I quickly learned that I needed to move my tempdb onto the
> larger partition. I also learned that I should move it onto a fast disk
> that isn't part of the RAID, because it doesn't need to be backed up.
> Additionally, I read that there is no need to put the system swap file on
> the RAID, and that for better performance it should be on a separate,
> faster disk.
> This is how I am thinking of reconfiguring the system:
> 2 36 GB 10,000 RPM SATA I drives mirrored under RAID 1, used solely for
> the OS
> 1 174 GB 10,000 RPM SATA I drive used for tempdb and the system swap space,
> and the remaining 2 500 GB drives again set up as RAID 1 for my
> development databases and environments.
> Does that sound better than my original setup? If not, I am open to
> suggestions.
> Also, if I did move my OS to the new 36 GB drive, would I need to do a
> complete reinstall, or could I do a backup and restore? I think it would be
> possible using Partition Magic or a similar tool.
> Thanks a lot
>
I can guess that would be true, each CPU having its own tempdb space to
use.
Thanks, I'll do that.
"TheSQLGuru" <kgboles@.earthlink.net> wrote in message
news:uC3biexpHHA.1144@.TK2MSFTNGP02.phx.gbl...
> Sounds improved. You may also want to split tempdb into two files, one
> on the 36 GB and one on the 174 GB drive. I think the optimum for 2005 is
> one tempdb file per CPU (not counting hyperthreads).
> --
> TheSQLGuru
> President
> Indicium Resources, Inc.
> "Coaster" <Coaster@.Coaster.net> wrote in message
> news:OKqVNMxpHHA.960@.TK2MSFTNGP03.phx.gbl...
>
Friday, February 24, 2012
ADO.NET or OLEDB connection/recordset?
My package needs to be a high-performance (target: 150,000 rows in 30 minutes) ETL solution. We are using all MS technologies - SSIS, SQL Server 2005, BIDS, etc. I need to loop through the recordset returned by a stored proc run in an Execute SQL Task, inside a Source Script Component.
If I use an ADO.NET Connection Manager, here is the code in the Source Script Component's Public Overrides Sub CreateNewOutputRows():
Code 1
Dim sqlAdapter As New SqlDataAdapter
Dim dataRow As Data.DataRow
Dim ds As DataSet = CType(Me.Variables.rsSomeResultset, DataSet)
sqlAdapter.Fill(ds)
I get: Error: System.InvalidCastException: Unable to cast object of type 'System.Object' to type 'System.Data.DataSet'.
Code 2
Dim oledbAdapter As New OleDb.OleDbDataAdapter
Dim dataTable As DataTable
oledbAdapter.Fill(dataTable, Me.Variables.rsSomeResultset)
Error: System.ArgumentException: Object is not an ADODB.RecordSet or an ADODB.Record. Parameter name: adodb
It works all right when I use an OLEDB Connection Manager with the second code sample.
Question: In order to extract the maximum performance, wouldn't it be preferred to use ADO.NET with SqlClient Provider in an all SQL Server 2005 environment? Or will an OLEDB Connection provide comparable or equal performance?
If so, what code can I use? The recordset returned by the stored proc (in the Execute SQL Task) can only be captured in a System.Object variable, and only the Fill() overload of the OleDbDataAdapter accepts an ADO Recordset or Record object.
There was a post recently that compared some of the connection types, but thanks to the lovely search functionality, I can't find it. If anyone else has it, please post it to this thread.
In general, I don't think you are going to see a significant performance difference between ADO.NET and OLEDB against SQL Server.
That being said, if you want to do some further research into the problem, try adding a message box to your script to display the type of the variable.
Code Snippet
System.Windows.Forms.MessageBox.Show(Me.Variables.rsSomeResultset.ToString())

Your Code 1 snippet doesn't look right for ADO.NET. You shouldn't be trying to call sqlAdapter.Fill(ds). Try Dim dataTable As DataTable = ds.Tables(0) instead. I'd guess that OLE DB would be faster. Please post your findings.
Do you mean this discussion?
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1985034&SiteID=1
ADO.Net 2.0 Bulk copy vs SSIS
news:B546BA27-E6A2-4477-926E-A37E3F6A635D@.microsoft.com...
> For simply loading a big amount of data, is the performance similar?
Yes, the performance is similar. You should choose between them based on
other factors.
David
Sunday, February 19, 2012
ADO Performance question
I am writing code which performs multiple inserts on a table.
Currently I am combining all my insert queries and passing them as command
text to an ADO Command object.
Another way would be to open a recordset, call AddNew on the recordset
multiple times, and then do a batch update.
But I am not sure which method is better than the other, or which approach
I should use.
Thanks in advance.
Pushkar

Hi
Why not just call a stored procedure that does the job?
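A minimal sketch of that suggestion; the procedure, table, and column names are invented for illustration. The idea is that the client makes one parameterized call per row instead of concatenating SQL text.

```sql
-- Hypothetical sketch: wrap the insert in a stored procedure so the
-- client sends parameters, not concatenated SQL text.
-- Table and column names are made up for illustration.
CREATE PROCEDURE dbo.InsertOrder
    @CustomerId INT,
    @Amount     MONEY
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Orders (CustomerId, Amount)
    VALUES (@CustomerId, @Amount);
END
```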
"Pushkar" <pushkartiwari@.gmail.com> wrote in message news:uA7uMRk2FHA.472@.TK
2MSFTNGP15.phx.gbl...
Hi,
I am writing a code which is actually performing multiple inserts on a table
.
Currently I am combining all my insert queries and passing as command text t
o ADO command object.
There is one more way by opening a recordset and call AddNew on recordset mu
ltiple times and then batch update.
But I am not sure which method is better than other and which approach shoul
d I use.
Thanks in advance.
Pushkar

If you are inserting 100,000+ records, then calling Recordset.AddNew for
each record will be very slow.
I once wrote an ETL application in VB 6.0/ADO 2.5 that performed some rather
complex data transformations on millions of records per day derived from the
mainframe and MS Excel. I would run the app on the local workstation,
appending each record to a tab-delimited text file. When this step completed,
I would then bulk copy (BCP.EXE) the resulting file into SQL Server. The
difference in runtime per batch was a couple of hours vs. 20 minutes. Another
advantage is that you can run the application on multiple PCs with no load on
the database server except for the bulk copy operations.
You may also want to try posting to microsoft.public.data.ado
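The BCP.EXE step described above can also be run server-side with T-SQL's BULK INSERT. A minimal sketch, assuming a hypothetical staging table and file path:

```sql
-- Sketch: load a tab-delimited file produced by the ETL app in one bulk
-- operation. Table name and file path are placeholders.
BULK INSERT dbo.StagingOrders
FROM 'D:\etl\orders.txt'
WITH (
    FIELDTERMINATOR = '\t',  -- tab-delimited, as produced by the app
    ROWTERMINATOR   = '\n',
    TABLOCK                  -- allows minimally logged loading
);
```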
"Pushkar" <pushkartiwari@.gmail.com> wrote in message news:uA7uMRk2FHA.472@.TK
2MSFTNGP15.phx.gbl...
Hi,
I am writing a code which is actually performing multiple inserts on a table
.
Currently I am combining all my insert queries and passing as command text t
o ADO command object.
There is one more way by opening a recordset and call AddNew on recordset mu
ltiple times and then batch update.
But I am not sure which method is better than other and which approach shoul
d I use.
Thanks in advance.
Pushkar

Batching is generally faster, but you are moving a lot of data across
the network. As Uri mentions, why not use a stored procedure instead
(even if this implies a few calls to the procedure)? If you are dealing
with thousands of rows, this might perform better as a bulk insert or
run locally on the server.
David Gugick
Quest Software
www.quest.com
"Pushkar" <pushkartiwari@.gmail.com> wrote in message
news:uA7uMRk2FHA.472@.TK2MSFTNGP15.phx.gbl...
Hi,
I am writing code which performs multiple inserts on a table.
Currently I am combining all my insert queries and passing them as command
text to an ADO Command object.
Another way would be to open a recordset, call AddNew on the recordset
multiple times, and then do a batch update.
But I am not sure which method is better than the other, or which approach
I should use.
Thanks in advance.
Pushkar
Thursday, February 9, 2012
Admin Performance Issues, Large amount of DBs
Cheers!

Use scripts?
-PatP

If you can run utilities on the server itself, register the server as "(LOCAL)", which uses local named pipes. Also make sure "Enable shared memory protocol" is checked in the Client Network Utility on the server.
Otherwise, if you cannot use the server console to run utilities, make sure you use the TCP/IP protocol.
Hans.