
Sunday, March 11, 2012

Advanced Log Shipping

We are researching whether we can accomplish the following goal with
log shipping. We have a reporting database that is mostly read-only
that we would like to migrate to log shipping.
Our environment consists of two database servers, one is our
operational database server and the other is a reporting database
server. All the data writes go to our operational database.
Currently, we replicate the operational database to the reporting
database every evening. We intentionally do this once a day so that we
have a 24 hour window to correct any data entry issues if they should
occur.
Log shipping sounds easy enough to set up, but here is where it gets
complicated for us. Our reporting database is used via our web portal
application. Our users have the ability to make changes during the day
via the portal. Some of these changes cause writes to both the
reporting database as well as the operational database. The writes to
the reporting database let the users get immediate changes, and the
writes to the operational database ensure the data will be updated upon
the next nightly update.
1) Will these intermittent writes to the reporting database server
prevent a log shipping transaction log restore from completing
successfully?
2) If we do transaction log backups once an hour, I assume we can save
them all and then replay all the tlog backups at one time in the middle
of the night.
We tried to set up replication at one point, but our database schema
would not replicate to a second server and we have not had the
resources to try to resolve the replication issues.
Thank You,
Kevin
Kevin,
log shipping restores the logs using NORECOVERY or STANDBY, both of which
will prevent the editing of the reporting data.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Log shipping target databases are locked read-only. You cannot update a
log-shipped target database. If you bring it live, then you cannot apply any
further transaction logs.
Yes, you can "save up" transaction logs and apply them in sequence.
Geoff N. Hiten
Senior Database Administrator
Microsoft SQL Server MVP
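As a minimal sketch of what replaying saved-up logs looks like (database and
file names are hypothetical; WITH STANDBY keeps the target readable between
restores, while NORECOVERY does not):

RESTORE LOG Reporting
    FROM DISK = N'D:\logs\Reporting_01.trn'
    WITH STANDBY = N'D:\logs\Reporting_undo.dat';
RESTORE LOG Reporting
    FROM DISK = N'D:\logs\Reporting_02.trn'
    WITH STANDBY = N'D:\logs\Reporting_undo.dat';
-- ...and so on for each hourly backup, strictly in backup order.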
"kghammond" <kghammond@.nrscorp.com> wrote in message
news:1133799275.397007.188420@.g43g2000cwa.googlegroups.com...
> We are researching whether we can accomplish the following goal with
> log shipping. We have a reporting database that is mostly read-only
> that we would like to migrate to log shipping.
> Our environment consists of two database servers, one is our
> operational database server and the other is a reporting database
> server. All the data writes go to our operational database.
> Currently, we replicate the operational database to the reporting
> database every evening. We intentional do this once a day so that we
> have a 24 hour window to correct any data entry issues if they should
> occur.
> Log shipping sounds easy enough to setup, but here is where it gets
> complicated for us. Our reporting database is used via our web portal
> application. Our users have the ability to make changes during the day
> via the portal. Some of these changes cause writes to both the
> reporting database as well as the operational database. The writes to
> the reporting database let the users get immediate changes, and the
> writes to the operational database ensure the data will be updated upon
> the next nightly update.
> 1) Will these intermittent writes to the reporting database server
> prevent a log shipping transaction log restore from completing
> successfully?
> 2) If we do transaction log backups once an hour, I assume we can save
> them all and then replay all the tlog backups at one time in the middle
> of the night.
> We tried to setup replication at one point, but our database schema
> would not replicate to a second server and we have not had the
> resources to try to resolve the replication issues.
> Thank You,
> Kevin
>
|||The business case that we are trying to resolve is that we are moving
our reporting database server to a co-location facility. In the past
we had high speed LAN access between our database servers so nightly
backups and restores of our database were not an issue. Our database
is currently about 7GB and it is growing at about 25% per year. Our
WAN connection is a standard point to point T1. 7GB is too much data
to move across the WAN on a nightly basis. LiteSpeed will compress the
backup to 1.5GB, but even that is very large compared to our 35MB
transaction logs.
Is it possible to make a copy of the log shipping target database on
the remote site? In essence, could we use the log shipping target
database to stage a production database across the WAN?
Operational Database --> Log Shipping --> Staging Database (RO) -->
Backup/detach/copy/etc --> Reporting Database (RW)
Thank You for the quick feedback,
Kevin
|||Upgrade to SQL Server 2005 and use database snapshots - exactly what they
are designed for.
You can't detach the database and copy, nor can you stop SQL Server and copy
the files and use those as inputs to sp_attach_db.
Tony.
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
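A minimal sketch of such a snapshot (database, logical file name, and path are
hypothetical; the snapshot clause must name each data file of the source
database):

CREATE DATABASE Reporting_Snap ON
    ( NAME = Reporting_Data,                     -- logical data file name of the source
      FILENAME = N'E:\snap\Reporting_Data.ss' )  -- sparse file backing the snapshot
AS SNAPSHOT OF Reporting;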
"kghammond" <kghammond@.nrscorp.com> wrote in message
news:1133800712.975757.55690@.f14g2000cwb.googlegroups.com...
> The business case that we are trying to resolve is that we are moving
> our reporting database server to a co-location facility. In the past
> we had high speed LAN access between our database servers so nightly
> backup and restores of our database where not an issue. Our database
> is currently about 7GB and it is growing at about 25% per year. Our
> WAN connection is a standard point to point T1. 7GB is too much data
> to move across the WAN on a nightly basis. LiteSpeed will shrink the
> database to 1.5GB, but still that is very large compared to 35MB
> transaction logs.
> Is it possible to make a copy of the log shipping target database on
> the remote site? In essence could we use the log shipping target
> database to stage a production database across the WAN.
> Operational Database --> Log Shipping --> Staging Database (RO) -->
> Backup/detach/copy/etc --> Reporting Database (RW)
> Thank You for the quick feedback,
> Kevin
>
|||Our final solution is to utilize Double Take. We plan to replicate our
.mdf and .ldf across the WAN using Double Take. Then once a night we
will pause Double Take and copy the .mdf and .ldf out of the replica.
We will then mount that as our reporting database.
So far all seems well. Scripting out an automated way to attach the
database, verify that it is not corrupt, and then back out to the
previous database if necessary is proving to be a little tricky. If
only DTS allowed better flow control...
SQL 2005 will be on its way soon, but not soon enough
Thank you all for your input,
Kevin
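A rough sketch of the attach-and-verify step Kevin describes (paths and names
are hypothetical, and it assumes the paused replica's files are in a consistent
state, which is exactly the part Tony's caveat says needs careful testing):

EXEC sp_attach_db @dbname    = N'ReportingCheck',
                  @filename1 = N'D:\replica\Reporting.mdf',
                  @filename2 = N'D:\replica\Reporting_log.ldf';
-- Verify before swapping it in as the live reporting database;
-- if this fails, fall back to the previous copy.
DBCC CHECKDB (N'ReportingCheck') WITH NO_INFOMSGS;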
|||Hello,
Thanks for letting me know the current status. If you have any questions or
concerns in the future, feel free to post back!
Sophie Guo
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
=====================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.


Thursday, March 8, 2012

Audit connections to SQL Server

On SQL Server 2005, are there any audit logs on by default that log connections
to SQL Server and which database the connection was to?
If not, is there one that you can turn on?
Not too long ago we had some records deleted out of the SQL database
mysteriously. I am trying to find out how that happened and to keep it from
happening again.
There is a default trace in SQL Server 2005 that is enabled
by default. You can find information about the default trace
in Books Online, and this link has more info:
http://www.mssqltips.com/tip.asp?tip=1111
-Sue
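A minimal sketch of reading the default trace on SQL Server 2005 (it lists
whatever events the default trace actually captured, with login, host, and
database columns where the event records them):

DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;
SELECT TOP 100 t.StartTime, t.LoginName, t.HostName,
       t.DatabaseName, te.name AS EventName
FROM fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON te.trace_event_id = t.EventClass
ORDER BY t.StartTime DESC;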
On Wed, 14 Feb 2007 08:49:05 -0800, Andrew
<Andrew@.discussions.microsoft.com> wrote:

>On sql 2005 is there any audit logs on by default that log connections to sql
>server and what database the connection was to.
>If not on, is there one that you can turn on.
>Not to long ago we had some records deleted out of the sql database
>mysteriously. I am trying to find out how that happened and to keep it from
>happening again.
|||Andrew,
In addition to Sue's reply
> Not to long ago we had some records deleted out of the sql database
> mysteriously. I am trying to find out how that happened and to keep it
> from
> happening again.
You can use
1) DDL Triggers
2) Event notifications
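A minimal sketch of option 1, using a hypothetical audit table (a DDL trigger
captures schema changes; rows deleted by ordinary DML would need an AFTER
DELETE trigger on the table itself):

CREATE TABLE dbo.DdlAudit (
    EventTime DATETIME NOT NULL DEFAULT GETDATE(),
    LoginName SYSNAME  NOT NULL,
    EventXml  XML      NOT NULL );
GO
CREATE TRIGGER trgDdlAudit ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    -- EVENTDATA() returns the event details (statement, object, time) as XML.
    INSERT INTO dbo.DdlAudit (LoginName, EventXml)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
GO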
"Andrew" <Andrew@.discussions.microsoft.com> wrote in message
news:8B1D1BB3-9EC2-4C31-87FA-064A1FB5C2E0@.microsoft.com...
> On sql 2005 is there any audit logs on by default that log connections to
> sql server and what database the connection was to.
> If not on, is there one that you can turn on.
> Not to long ago we had some records deleted out of the sql database
> mysteriously. I am trying to find out how that happened and to keep it
> from
> happening again.
>

Friday, February 24, 2012

ADO.Net connection timeout and transaction log

Hi newsgroup.
Can someone help me figure out what happens to the transaction log
file when you run an ADO.Net transaction that times out? I have
configured the database recovery model to SIMPLE, so normally the
transaction log file should never grow large, yet on some occasions I have
found it growing to several gigabytes.
If the connection times out in the C# code and is eventually closed,
is Sql Server notified of it and does it roll back the transaction
right away or does it keep the transaction waiting around, thus
filling up the transaction log? What is the default time out before
Sql Server notices that the client got disconnected and that the
transaction must be rolled back?
I have the following code running. Is it possible that it would make
the transaction log grow when exceptions occur?
// m_Connection is a SqlConnection object.
DbTransaction trans = m_Connection.BeginTransaction();
try
{
// Some transaction uploading large files and
// updating several records in several tables
trans.Commit();
}
catch( SqlException sqlEx )
{
trans.Rollback();
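// SqlTimeoutError is assumed to be a constant defined elsewhere in this
// class; ADO.NET reports command timeouts as SqlException.Number == -2.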
if( sqlEx.Number == SqlTimeoutError )
{
throw new NeedRetryException( "SqlTimeoutException
is occured.", sqlEx );
}
else
{
throw;
}
}
catch
{
trans.Rollback();
throw;
}
finally
{
trans.Dispose();
m_Connection.Close();
}
Any help or pointers greatly appreciated.
Tony.
> I have configure the database recovery model to SIMPLE so normally the
> transaction log file should never grow large, yet in some occasions, I
> found it growing to several Gigs.
The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
of a transaction that matters. In other words, you can have your database in
the simple mode, but if you run a large single transaction, you can still
blow up your log file.
Linchi
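One way to see this directly is to sample log usage just before and just after
the large transaction, for example:

DBCC SQLPERF(LOGSPACE);  -- one row per database: log size in MB and percent used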
"tony.newsgrps@.gmail.com" wrote:
> Hi newsgroup.
> Can someone help me figure out what happens to the transaction log
> file when you run an ADO.Net transaction that times out? I have
> configure the database recovery model to SIMPLE so normally the
> transaction log file should never grow large, yet in some occasions, I
> found it growing to several Gigs.
> If the connection times out in the C# code and is eventually closed,
> is Sql Server notified of it and does it roll back the transaction
> right away or does it keep the transaction waiting around, thus
> filling up the transaction log? What is the default time out before
> Sql Server notices that the client got disconnected and that the
> transaction must be rolled back?
> I have the following code running. Is it possible that it would make
> the transaction log grow when exceptions occur
> // m_Connection is a SqlConnection object.
> DbTransaction trans = m_Connection.BeginTransaction();
> try
> {
> // Some transaction uploading large files and
> // updating several records in several tables
> trans.Commit();
> }
> catch( SqlException sqlEx )
> {
> trans.Rollback();
> if( sqlEx.Number == SqlTimeoutError )
> {
> throw new NeedRetryException( "SqlTimeoutException
> is occured.", sqlEx );
> }
> else
> {
> throw;
> }
> }
> catch
> {
> trans.Rollback();
> throw;
> }
> finally
> {
> trans.Dispose();
> m_Connection.Close();
> }
> Any help or pointer greatly appreciated.
> Tony.
>|||Hi Linchi,
Thank you for the answer. The transactions I'm running may be large at
times (i.e. inserting perhaps 50MB of data), but the transaction log
grows larger than 1GB. That's 20 times larger, so to me it does not
sound like 1GB is a normal size for the transaction log. I could expect
100MB, but 1GB seems way too high.
What else could be the reason for that log file growing?
Thank you,
Tony.
On Dec 11, 10:44 pm, Linchi Shea
<LinchiS...@.discussions.microsoft.com> wrote:
> > I have configure the database recovery model to SIMPLE so normally the
> > transaction log file should never grow large, yet in some occasions, I
> > found it growing to several Gigs.
> The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
> of a transaction that matters. In other words, you can have your database in
> the simple mode, but if you run a large single transaction, you can still
> blow up your log file.
> Linchi
> "tony.newsg...@.gmail.com" wrote:
> > Hi newsgroup.
> > Can someone help me figure out what happens to the transaction log
> > file when you run an ADO.Net transaction that times out? I have
> > configure the database recovery model to SIMPLE so normally the
> > transaction log file should never grow large, yet in some occasions, I
> > found it growing to several Gigs.
> > If the connection times out in the C# code and is eventually closed,
> > is Sql Server notified of it and does it roll back the transaction
> > right away or does it keep the transaction waiting around, thus
> > filling up the transaction log? What is the default time out before
> > Sql Server notices that the client got disconnected and that the
> > transaction must be rolled back?
> > I have the following code running. Is it possible that it would make
> > the transaction log grow when exceptions occur
> > // m_Connection is a SqlConnection object.
> > DbTransaction trans = m_Connection.BeginTransaction();
> > try
> > {
> > // Some transaction uploading large files and
> > // updating several records in several tables
> > trans.Commit();
> > }
> > catch( SqlException sqlEx )
> > {
> > trans.Rollback();
> > if( sqlEx.Number == SqlTimeoutError )
> > {
> > throw new NeedRetryException( "SqlTimeoutException
> > is occured.", sqlEx );
> > }
> > else
> > {
> > throw;
> > }
> > }
> > catch
> > {
> > trans.Rollback();
> > throw;
> > }
> > finally
> > {
> > trans.Dispose();
> > m_Connection.Close();
> > }
> > Any help or pointer greatly appreciated.
> > Tony.
|||The log file will only grow when it is full, you are modifying more data
in the database, and autogrow is turned on for the transaction log.
(Yes, this is regardless of whether you are using the SIMPLE recovery
model or not. The SIMPLE recovery model just means the tlog is
truncated from time to time based on a number of factors.) What are
your autogrow settings on that logical file?
--
Mike
http://sqlnerd.blogspot.com
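A quick way to check those settings on SQL Server 2005 (size and growth are in
8KB pages unless is_percent_growth is 1):

SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       growth, is_percent_growth, max_size
FROM sys.database_files;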
tony.newsgrps@.gmail.com wrote:
> Hi Linchi,
> Thank you for the answer. The transactions I'm running may be large at
> times (i.e. inserting perhaps 50MB of data), but the transaction log
> grows larger than 1GB. That's a 20 time larger so to me it does not
> sound like 1GB is a normal size for the transaction log. I can expect
> 100MB to be, but 1GB seems way too high.
> What else could be the reason for that log file growing?
> Thank you,
> Tony.
> On Dec 11, 10:44 pm, Linchi Shea
> <LinchiS...@.discussions.microsoft.com> wrote:
>> I have configure the database recovery model to SIMPLE so normally the
>> transaction log file should never grow large, yet in some occasions, I
>> found it growing to several Gigs.
>> The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
>> of a transaction that matters. In other words, you can have your database in
>> the simple mode, but if you run a large single transaction, you can still
>> blow up your log file.
>> Linchi
>> "tony.newsg...@.gmail.com" wrote:
>> Hi newsgroup.
>> Can someone help me figure out what happens to the transaction log
>> file when you run an ADO.Net transaction that times out? I have
>> configure the database recovery model to SIMPLE so normally the
>> transaction log file should never grow large, yet in some occasions, I
>> found it growing to several Gigs.
>> If the connection times out in the C# code and is eventually closed,
>> is Sql Server notified of it and does it roll back the transaction
>> right away or does it keep the transaction waiting around, thus
>> filling up the transaction log? What is the default time out before
>> Sql Server notices that the client got disconnected and that the
>> transaction must be rolled back?
>> I have the following code running. Is it possible that it would make
>> the transaction log grow when exceptions occur
>> // m_Connection is a SqlConnection object.
>> DbTransaction trans = m_Connection.BeginTransaction();
>> try
>> {
>> // Some transaction uploading large files and
>> // updating several records in several tables
>> trans.Commit();
>> }
>> catch( SqlException sqlEx )
>> {
>> trans.Rollback();
>> if( sqlEx.Number == SqlTimeoutError )
>> {
>> throw new NeedRetryException( "SqlTimeoutException
>> is occured.", sqlEx );
>> }
>> else
>> {
>> throw;
>> }
>> }
>> catch
>> {
>> trans.Rollback();
>> throw;
>> }
>> finally
>> {
>> trans.Dispose();
>> m_Connection.Close();
>> }
>> Any help or pointer greatly appreciated.
>> Tony.
>
|||Hi Mike,
I capped the log file to 1GB with no autogrow so when I say that the
transaction log file is growing, I actually mean that I'm reaching
that limit of 1GB and any subsequent transaction is failing because of
that.
I believe 1GB should be plenty enough. The largest data I would insert
is probably around 50MB split in say 1500 rows, all in one
transaction. I can't imagine that doing that transaction would
generate 1GB of logs in the transaction log file, would it? All my
transactions are serialized, so I'm sure that I don't have 20 of these
large ones running at the same time.
Thank you,
Tony
On Dec 12, 1:08 am, Mike Hodgson <e1mins...@.gmail.com> wrote:
> The log file will only grow when it is full, you are modifying more data
> in the database and autogrow is turned on for the transaction log.
> (Yes, this is regardless of whether you are using the SIMPLE recovery
> model or not. The SIMPLE recovery model just means the tlog is
> truncated from time to time based on a number of factors.) What are
> your autogrow settings on that logical file?
> --
> Mike
> http://sqlnerd.blogspot.com
> tony.newsg...@.gmail.com wrote:
> > Hi Linchi,
> > Thank you for the answer. The transactions I'm running may be large at
> > times (i.e. inserting perhaps 50MB of data), but the transaction log
> > grows larger than 1GB. That's a 20 time larger so to me it does not
> > sound like 1GB is a normal size for the transaction log. I can expect
> > 100MB to be, but 1GB seems way too high.
> > What else could be the reason for that log file growing?
> > Thank you,
> > Tony.
> > On Dec 11, 10:44 pm, Linchi Shea
> > <LinchiS...@.discussions.microsoft.com> wrote:
> >> I have configure the database recovery model to SIMPLE so normally the
> >> transaction log file should never grow large, yet in some occasions, I
> >> found it growing to several Gigs.
> >> The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
> >> of a transaction that matters. In other words, you can have your database in
> >> the simple mode, but if you run a large single transaction, you can still
> >> blow up your log file.
> >> Linchi
> >> "tony.newsg...@.gmail.com" wrote:
> >> Hi newsgroup.
> >> Can someone help me figure out what happens to the transaction log
> >> file when you run an ADO.Net transaction that times out? I have
> >> configure the database recovery model to SIMPLE so normally the
> >> transaction log file should never grow large, yet in some occasions, I
> >> found it growing to several Gigs.
> >> If the connection times out in the C# code and is eventually closed,
> >> is Sql Server notified of it and does it roll back the transaction
> >> right away or does it keep the transaction waiting around, thus
> >> filling up the transaction log? What is the default time out before
> >> Sql Server notices that the client got disconnected and that the
> >> transaction must be rolled back?
> >> I have the following code running. Is it possible that it would make
> >> the transaction log grow when exceptions occur
> >> // m_Connection is a SqlConnection object.
> >> DbTransaction trans = m_Connection.BeginTransaction();
> >> try
> >> {
> >> // Some transaction uploading large files and
> >> // updating several records in several tables
> >> trans.Commit();
> >> }
> >> catch( SqlException sqlEx )
> >> {
> >> trans.Rollback();
> >> if( sqlEx.Number == SqlTimeoutError )
> >> {
> >> throw new NeedRetryException( "SqlTimeoutException
> >> is occured.", sqlEx );
> >> }
> >> else
> >> {
> >> throw;
> >> }
> >> }
> >> catch
> >> {
> >> trans.Rollback();
> >> throw;
> >> }
> >> finally
> >> {
> >> trans.Dispose();
> >> m_Connection.Close();
> >> }
> >> Any help or pointer greatly appreciated.
> >> Tony.
|||Do you have many indexes on the tables you're modifying? If you have,
say, 20 indexes on a table and you insert 1 row in that table, SQL
Server will do a lot more I/Os than just what's needed for the insert in
the clustered index (or heap) because it has to maintain all the
nonclustered indexes.
Another possibility is your transaction is causing a lot of page splits
(which will also cause more modifications than you are expecting because
it's not just changing the page the row is inserted into, it would be
doing extra index page allocations, updating IAM pages, etc.). Can you
check the logical fragmentation of the relevant indexes? If they're
fragmenting quickly this could be the reason. (It also depends on the
values of the index keys you're inserting, as to whether they get
inserted in the middle of the index (possibly causing splits) or towards
the end of the index.)
These are just a couple of a number of reasons that your transaction log
might be growing more rapidly than you expect. Where I was going with
the autogrow settings on the log file was that if it was set to grow by
a large amount (e.g. 1GB or 50% or something like that), then when it
needed to grow from its initial small size of, say, 200MB, the
autogrow increment might have caused it to exceed 1GB in a single growth.
But if you have it set at 1GB (and you don't want it to grow any
larger), why don't you turn off autogrow and set initial size of the log
at a hard 1GB? (Not that that's particularly important - just a bit
better than autogrowing files during transactions.)
If you're really keen to know the answer to your dilemma you could look
through the log to see what it's putting in there. There is no official
Microsoft way of doing this. In SQL 2000 there was an undocumented
function called ::fn_dblog() that you could use to look in the current
transaction log (not log backups) but the interpreting the output is a
bit difficult (as it's not documented and not very straight forward).
You query it like this:
select * from ::fn_dblog(null, null);
I don't know, off the top of my head, if this function is still in SQL
2005 (I suspect not) or if there are official replacements for it in the
DMVs (I suspect not again). A better way would be to buy one of the 3rd
party tools on the market (Lumigent Log Explorer springs to mind) that
do pretty much the same thing (and more) except in a more user-friendly
interface.
--
Mike
http://sqlnerd.blogspot.com
tony.newsgrps@.gmail.com wrote:
> Hi Mike,
> I capped the log file to 1GB with no autogrow so when I say that the
> transaction log file is growing, I actually mean that I'm reaching
> that limit of 1GB and any subsequent transaction is failing because of
> that.
> I believe 1GB should be plenty enough. The largest data I would insert
> is probably around 50MB split in say 1500 rows, all in one
> transaction. I can't imagine that doing that transaction would
> generate 1GB of logs in the transaction log file, would it? All my
> transactions are serialized, so I'm sure that I don't have 20 of these
> large ones running at the same time.
> Thank you,
> Tony
> On Dec 12, 1:08 am, Mike Hodgson <e1mins...@.gmail.com> wrote:
>> The log file will only grow when it is full, you are modifying more data
>> in the database and autogrow is turned on for the transaction log.
>> (Yes, this is regardless of whether you are using the SIMPLE recovery
>> model or not. The SIMPLE recovery model just means the tlog is
>> truncated from time to time based on a number of factors.) What are
>> your autogrow settings on that logical file?
>> --
>> Mikehttp://sqlnerd.blogspot.com
>> tony.newsg...@.gmail.com wrote:
>> Hi Linchi,
>> Thank you for the answer. The transactions I'm running may be large at
>> times (i.e. inserting perhaps 50MB of data), but the transaction log
>> grows larger than 1GB. That's a 20 time larger so to me it does not
>> sound like 1GB is a normal size for the transaction log. I can expect
>> 100MB to be, but 1GB seems way too high.
>> What else could be the reason for that log file growing?
>> Thank you,
>> Tony.
>> On Dec 11, 10:44 pm, Linchi Shea
>> <LinchiS...@.discussions.microsoft.com> wrote:
>> I have configure the database recovery model to SIMPLE so normally the
>> transaction log file should never grow large, yet in some occasions, I
>> found it growing to several Gigs.
>> The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
>> of a transaction that matters. In other words, you can have your database in
>> the simple mode, but if you run a large single transaction, you can still
>> blow up your log file.
>> Linchi
>> "tony.newsg...@.gmail.com" wrote:
>> Hi newsgroup.
>> Can someone help me figure out what happens to the transaction log
>> file when you run an ADO.Net transaction that times out? I have
>> configure the database recovery model to SIMPLE so normally the
>> transaction log file should never grow large, yet in some occasions, I
>> found it growing to several Gigs.
>> If the connection times out in the C# code and is eventually closed,
>> is Sql Server notified of it and does it roll back the transaction
>> right away or does it keep the transaction waiting around, thus
>> filling up the transaction log? What is the default time out before
>> Sql Server notices that the client got disconnected and that the
>> transaction must be rolled back?
>> I have the following code running. Is it possible that it would make
>> the transaction log grow when exceptions occur
>> // m_Connection is a SqlConnection object.
>> DbTransaction trans = m_Connection.BeginTransaction();
>> try
>> {
>> // Some transaction uploading large files and
>> // updating several records in several tables
>> trans.Commit();
>> }
>> catch( SqlException sqlEx )
>> {
>> trans.Rollback();
>> if( sqlEx.Number == SqlTimeoutError )
>> {
>> throw new NeedRetryException( "SqlTimeoutException
>> is occured.", sqlEx );
>> }
>> else
>> {
>> throw;
>> }
>> }
>> catch
>> {
>> trans.Rollback();
>> throw;
>> }
>> finally
>> {
>> trans.Dispose();
>> m_Connection.Close();
>> }
>> Any help or pointer greatly appreciated.
>> Tony.
>

ADO.Net connection timeout and transaction log

Hi newsgroup.
Can someone help me figure out what happens to the transaction log
file when you run an ADO.Net transaction that times out? I have
configured the database recovery model to SIMPLE, so normally the
transaction log file should never grow large, yet on some occasions I
have found it growing to several gigabytes.
If the connection times out in the C# code and is eventually closed,
is SQL Server notified of it, and does it roll back the transaction
right away, or does it keep the transaction waiting around, thus
filling up the transaction log? What is the default timeout before
SQL Server notices that the client got disconnected and that the
transaction must be rolled back?
I have the following code running. Is it possible that it would make
the transaction log grow when exceptions occur?
// m_Connection is a SqlConnection object.
DbTransaction trans = m_Connection.BeginTransaction();
try
{
    // Some transaction uploading large files and
    // updating several records in several tables
    trans.Commit();
}
catch( SqlException sqlEx )
{
    trans.Rollback();
    if( sqlEx.Number == SqlTimeoutError )
    {
        throw new NeedRetryException( "SqlTimeoutException occurred.", sqlEx );
    }
    else
    {
        throw;
    }
}
catch
{
    trans.Rollback();
    throw;
}
finally
{
    trans.Dispose();
    m_Connection.Close();
}
Any help or pointer greatly appreciated.
Tony.|||> I have configured the database recovery model to SIMPLE, so normally the
> transaction log file should never grow large, yet on some occasions I
> have found it growing to several gigabytes.
The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
of a transaction that matters. In other words, you can have your database in
the simple mode, but if you run a large single transaction, you can still
blow up your log file.
Linchi
"tony.newsgrps@.gmail.com" wrote:

> Hi newsgroup.
> Can someone help me figure out what happens to the transaction log
> file when you run an ADO.Net transaction that times out? I have
> configured the database recovery model to SIMPLE, so normally the
> transaction log file should never grow large, yet on some occasions I
> have found it growing to several gigabytes.
> If the connection times out in the C# code and is eventually closed,
> is Sql Server notified of it and does it roll back the transaction
> right away or does it keep the transaction waiting around, thus
> filling up the transaction log? What is the default time out before
> Sql Server notices that the client got disconnected and that the
> transaction must be rolled back?
> I have the following code running. Is it possible that it would make
> the transaction log grow when exceptions occur?
> // m_Connection is a SqlConnection object.
> DbTransaction trans = m_Connection.BeginTransaction();
> try
> {
> // Some transaction uploading large files and
> // updating several records in several tables
> trans.Commit();
> }
> catch( SqlException sqlEx )
> {
> trans.Rollback();
> if( sqlEx.Number == SqlTimeoutError )
> {
> throw new NeedRetryException( "SqlTimeoutException
> occurred.", sqlEx );
> }
> else
> {
> throw;
> }
> }
> catch
> {
> trans.Rollback();
> throw;
> }
> finally
> {
> trans.Dispose();
> m_Connection.Close();
> }
> Any help or pointer greatly appreciated.
> Tony.
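To make Linchi's point concrete, here is a minimal T-SQL sketch; the table dbo.Staging and its processed flag are made-up names, not from the thread. Committing in batches lets a SIMPLE-recovery log truncate at each checkpoint, whereas one monolithic transaction forces the log to hold every row until the final COMMIT.

-- Hypothetical example: purge dbo.Staging in 10,000-row batches rather
-- than in one huge transaction, so the log can truncate between commits.
SET ROWCOUNT 10000              -- SQL 2000 idiom for capping batch size

WHILE 1 = 1
BEGIN
    BEGIN TRAN
    DELETE FROM dbo.Staging WHERE processed = 1
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN
        BREAK                   -- nothing left to delete
    END
    COMMIT TRAN
END

SET ROWCOUNT 0                  -- always reset, or later statements stay capped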
>|||Hi Linchi,
Thank you for the answer. The transactions I'm running may be large at
times (i.e. inserting perhaps 50MB of data), but the transaction log
grows larger than 1GB. That's 20 times larger, so to me it does not
sound like 1GB is a normal size for the transaction log. I could expect
100MB, but 1GB seems way too high.
What else could be the reason for that log file growing?
Thank you,
Tony.
On Dec 11, 10:44 pm, Linchi Shea
<LinchiS...@.discussions.microsoft.com> wrote:
> The SIMPLE mode doesn't mean you won't get a large tran log. It's the size
> of a transaction that matters. In other words, you can have your database in
> the simple mode, but if you run a large single transaction, you can still
> blow up your log file.
> Linchi
> "tony.newsg...@.gmail.com" wrote:
>
>
>
>|||The log file will only grow when it is full, when you are modifying more data
in the database, and when autogrow is turned on for the transaction log.
(Yes, this is regardless of whether you are using the SIMPLE recovery
model or not. The SIMPLE recovery model just means the tlog is
truncated from time to time based on a number of factors.) What are
your autogrow settings on that logical file?
Mike
http://sqlnerd.blogspot.com
tony.newsgrps@.gmail.com wrote:
> Hi Linchi,
> Thank you for the answer. The transactions I'm running may be large at
> times (i.e. inserting perhaps 50MB of data), but the transaction log
> grows larger than 1GB. That's 20 times larger, so to me it does not
> sound like 1GB is a normal size for the transaction log. I could expect
> 100MB, but 1GB seems way too high.
> What else could be the reason for that log file growing?
> Thank you,
> Tony.
> On Dec 11, 10:44 pm, Linchi Shea
> <LinchiS...@.discussions.microsoft.com> wrote:
>
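The autogrow settings Mike asks about can be read straight from the server; a minimal sketch (sp_helpfile is run in the database in question):

-- Lists every file in the current database with its size and growth setting
EXEC sp_helpfile

-- Or query the catalog directly (SQL 2000); size is stored in 8KB pages
SELECT name, size / 128 AS size_mb, growth, maxsize
FROM master..sysaltfiles
WHERE dbid = DB_ID()            -- current database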
>|||Hi Mike,
I capped the log file to 1GB with no autogrow so when I say that the
transaction log file is growing, I actually mean that I'm reaching
that limit of 1GB and any subsequent transaction is failing because of
that.
I believe 1GB should be plenty. The largest data I would insert
is probably around 50MB split across, say, 1500 rows, all in one
transaction. I can't imagine that doing that transaction would
generate 1GB of logs in the transaction log file, would it? All my
transactions are serialized, so I'm sure that I don't have 20 of these
large ones running at the same time.
Thank you,
Tony
On Dec 12, 1:08 am, Mike Hodgson <e1mins...@.gmail.com> wrote:
> The log file will only grow when it is full, when you are modifying more data
> in the database, and when autogrow is turned on for the transaction log.
> (Yes, this is regardless of whether you are using the SIMPLE recovery
> model or not. The SIMPLE recovery model just means the tlog is
> truncated from time to time based on a number of factors.) What are
> your autogrow settings on that logical file?
> --
> Mikehttp://sqlnerd.blogspot.com
> tony.newsg...@.gmail.com wrote:
>
>
>
>
>
>
>
>
>
>
>
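One way to test the thread's original suspicion, that a timed-out client left a transaction open and is pinning the log, is to look for old active transactions while the log is large. A sketch, with MyDb as a placeholder database name:

-- Shows the oldest active transaction in the database; a transaction
-- orphaned by a client timeout would appear here with an old start time
DBCC OPENTRAN ('MyDb')

-- Shows how full each database's transaction log currently is
DBCC SQLPERF (LOGSPACE)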
>|||Do you have many indexes on the tables you're modifying? If you have,
say, 20 indexes on a table and you insert 1 row in that table, SQL
Server will do a lot more I/Os than just what's needed for the insert in
the clustered index (or heap) because it has to maintain all the
nonclustered indexes.
Another possibility is your transaction is causing a lot of page splits
(which will also cause more modifications than you are expecting because
it's not just changing the page the row is inserted into, it would be
doing extra index page allocations, updating IAM pages, etc.). Can you
check the logical fragmentation of the relevant indexes? If they're
fragmenting quickly this could be the reason. (It also depends on the
values of the index keys you're inserting, as to whether they get
inserted in the middle of the index (possibly causing splits) or towards
the end of the index.)
These are just a couple of a number of reasons that your transaction log
might be growing more rapidly than you expect. Where I was going with
the autogrow settings on the log file was that if it was set to grow by
a large amount (e.g. 1GB, or 50%, or something like that), then when it
needed to grow from its initial small size of, say, 200MB, the
autogrow increment might have caused it to exceed 1GB in a single growth.
But if you have it set at 1GB (and you don't want it to grow any
larger), why don't you turn off autogrow and set the initial size of the log
at a hard 1GB? (Not that that's particularly important - just a bit
better than autogrowing files during transactions.)
If you're really keen to know the answer to your dilemma you could look
through the log to see what it's putting in there. There is no official
Microsoft way of doing this. In SQL 2000 there was an undocumented
function called ::fn_dblog() that you could use to look in the current
transaction log (not log backups), but interpreting the output is a
bit difficult (as it's not documented and not very straightforward).
You query it like this:
select * from ::fn_dblog(null, null);
I don't know, off the top of my head, if this function is still in SQL
2005 (I suspect not) or if there are official replacements for it in the
DMVs (I suspect not again). A better way would be to buy one of the 3rd
party tools on the market (Lumigent Log Explorer springs to mind) that
do pretty much the same thing (and more) except in a more user-friendly
interface.
Mike
http://sqlnerd.blogspot.com
tony.newsgrps@.gmail.com wrote:
> Hi Mike,
> I capped the log file to 1GB with no autogrow so when I say that the
> transaction log file is growing, I actually mean that I'm reaching
> that limit of 1GB and any subsequent transaction is failing because of
> that.
> I believe 1GB should be plenty enough. The largest data I would insert
> is probably around 50MB split in say 1500 rows, all in one
> transaction. I can't imagine that doing that transaction would
> generate 1GB of logs in the transaction log file, would it? All my
> transactions are serialized, so I'm sure that I don't have 20 of these
> large ones running at the same time.
> Thank you,
> Tony
> On Dec 12, 1:08 am, Mike Hodgson <e1mins...@.gmail.com> wrote:
>
>
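Mike's ::fn_dblog() query can also be narrowed down. As a speculative refinement (the function and its column names are undocumented, so they may differ between builds), grouping by operation shows which kinds of log records are consuming the space:

-- Undocumented: summarise the active log by operation type (SQL 2000)
SELECT [Operation],
       COUNT(*)                 AS log_records,
       SUM([Log Record Length]) AS total_bytes
FROM ::fn_dblog(NULL, NULL)
GROUP BY [Operation]
ORDER BY total_bytes DESC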

Sunday, February 12, 2012

administrator

Do users need to be administrators to create Data Driven Subscriptions?
If I log into my machine as one of my users, the Create Data Driven
Subscription button is not there, but if I log in as myself or as an
administrator (both in the admin group) I can see it. I also noticed that
non-admin users cannot see the data sources even though the credentials are
stored on the report server.
thanks
--
---
Yes, I searched first :)|||OK, the user can see the Create Data Driven Subscription button now, but when I am
on the step where I pick the data source, none are available?|||OK, I figured out all my problems.
"Kmistic" wrote:
> Do users need to be administrators to create Data Driven Subscriptions?
> If I log into my machine as one of my users, the Create Data Driven
> Subscription button is not there, but if I log in as myself or as an
> administrator (both in the admin group) I can see it. I also noticed that
> non-admin users cannot see the data sources even though the credentials are
> stored on the report server.
> thanks
> --
> ---
> Yes, I searched first :)

Thursday, February 9, 2012

adjusting size of templog does not remain

A bit bizarre; I could be overlooking something, since tempdb is not like a user database.
Our log size for tempdb (templog) is 15MB; I need to increase this to 20MB.
I do so through SQL Enterprise Manager. Adjust the size, hit apply, and the size is adjusted!
When I run a financial app that uses tempdb, I receive the message that the templog file is full (back up the transaction log, and so on).
When I go back to tempdb, the templog file is back at 15MB.
Jeff
Message posted via http://www.sqlmonster.com
Perhaps you restarted SQL Server in between. The size tempdb gets at startup is set using ALTER
DATABASE.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Jeffrey Sheldon via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:c83bbdcd541d46a4bb16394dcf882151@.SQLMonster.com...
>A bit bizarre I could be overlooking something since tempdb is not like a user database.
> Our log size for tempdb (templog) is 15mb, I need to increase this to 20mb.
> I do so through the SQL Enterprise Manager. Adjust the size, hit apply and the size is adjusted!
> When I run a Financial apps that uses tempdb, I receive the message that the templog files are
> full backup and ...
> When I go back to tempdb the templog file is back at 15mb.
> Jeff
> --
> Message posted via http://www.sqlmonster.com
|||Actually, I thought the same, but when I change it on one of my local machines
using EM and restart SQL Server, I see the new size of tempdb after the modification...
"Tibor Karaszi" wrote:

> Perhaps you restarted SQL Server in between. Size for tempdb at startup you set using ALTER
> DATABASE.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> http://www.sqlug.se/
>
> "Jeffrey Sheldon via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
> news:c83bbdcd541d46a4bb16394dcf882151@.SQLMonster.c om...
>
>
|||Ahh yes, the light bulb went off. tempdb needs the ALTER DATABASE command to adjust its data/log files.
Thanks
Jeff
Message posted via http://www.sqlmonster.com
|||Ok now I am stumped.
I did
use master
ALTER DATABASE tempdb
MODIFY FILE
(name = templog,
size = 20MB)
go
The templog is then adjusted to 20mb.
I run my financial stored procedure and receive the message that templog ran out of space, blah blah...
So I go back to review and increase it some more, but lo and behold, templog is back at 15MB!!!!
A stored procedure that simply takes data from one database (fewer than 2000 records) and uses temp tables to do some calculations makes the log file shift back to 15MB.
Any ideas?
Jeff
Message posted via http://www.sqlmonster.com
|||What size does master..sysaltfiles specify for the file? Also, perhaps you have autoshrink turned on
for the tempdb database?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
http://www.sqlug.se/
"Jeffrey Sheldon via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:29ce9a8218c945548d497982ae85ee55@.SQLMonster.com...
> Ok now I am stumped.
> I did
> use master
> ALTER DATABASE tempdb
> MODIFY FILE
> (name = templog,
> size = 20MB)
> go
> The templog is then adjusted to 20mb.
> I run my financial stored procedure and receive the message templog ran out of space blah blah...
> So I go back to review and increase so more, but lo and behold templog is Back at 15mb!!!!
> A stored procedure that simply taking data from one database (less than 2000 records) and using
> temp tables to do some calculations make the log file shift back to 15mb.
> Any ideas?
> Jeff
> --
> Message posted via http://www.sqlmonster.com
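The two checks Tibor suggests can be scripted; a minimal sketch against the SQL 2000 system tables:

-- What master..sysaltfiles records for the tempdb files;
-- size is stored in 8KB pages, so divide by 128 for MB
SELECT name, size / 128 AS size_mb, growth
FROM master..sysaltfiles
WHERE dbid = DB_ID('tempdb')

-- Is autoshrink turned on for tempdb? 1 = yes
SELECT DATABASEPROPERTYEX('tempdb', 'IsAutoShrink') AS is_auto_shrink

-- If it is, turning it off should stop the files snapping back:
-- ALTER DATABASE tempdb SET AUTO_SHRINK OFF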