Thursday, March 22, 2012
AdventureWorks database
The sample databases are a custom install option.
You might be able to get it now by just reinstalling the tools and making
sure you choose the sample databases. Otherwise you can download it from
here
http://www.microsoft.com/downloads/details.aspx?FamilyID=E719ECF7-9F46-4312-AF89-6AD8702E4E6E&displaylang=en
HTH
Kalen Delaney, SQL Server MVP
http://sqlblog.com
"Alan T" <alanpltseNOSPAM@.yahoo.com.au> wrote in message
news:ucBHEhvLHHA.780@.TK2MSFTNGP03.phx.gbl...
> Does the 'AdventureWorks' database come with SQL 2005?
> The book 'Mastering Microsoft SQL Server 2005' from Sybex says it
> comes with SQL Server 2005, but I cannot find it in Management
> Studio.
>
Alan T wrote:
> Thanks.
> I downloaded the 'SqlServerSamples.msi' and installed it.
> However, I looked through the Samples folder and cannot find the AdventureWorks
> database.
You should download AdventureWorksDB.msi or AdventureWorksDBCI.msi
Razvan
|||Yes, SqlServerSamples.msi contains only code samples. There are separate downloads
for the sample databases, which (as you can see in the link I gave you) are
larger files than the Samples download. You can choose to download AW in
either a case-sensitive or case-INsensitive collation.
HTH
Kalen Delaney, SQL Server MVP
http://sqlblog.com
"Razvan Socol" <rsocol@.gmail.com> wrote in message
news:1167832817.697717.13790@.48g2000cwx.googlegrou ps.com...
Tuesday, March 20, 2012
Advanced select
Hi,
I have 3 tables.
1. TableValues with MainID, FieldID and value
2. TableFields with FieldID, FieldName, FieldTypeID
3. TableFieldsTypes with FieldTypeID, TypeName
I want to create a select that returns columns named after the field names, with values taken from TableValues and types taken from TableFieldsTypes.
How can I do this?
Thanks,
Alexei
Select tf.FieldName,
tft.fieldType,
tv.[value] AS Value
FROM TableValues tv
INNER JOIN TableFields tf ON (tf.fieldID = tv.fieldID)
INNER JOIN TableFieldsTypes tft ON (tf.FieldTypeID = tft.FieldTypeID)
|||
Hi,
I mean that:
1. The fields table holds 144 fields.
2. Not all fields have a value.
3. In the summary I need a table with 144 columns, one per field.
Thanks,
Alexei
Switch so you select from the table that contains the rows you must have in the output, and left outer join anything that might not exist.
Modifying RPaulo's answer a bit; my changes are the join order and the LEFT OUTER JOIN.
SELECT tf.FieldName,
tft.fieldType,
tv.[value] AS Value
FROM TableFields tf
INNER JOIN TableFieldsTypes tft ON (tf.FieldTypeID = tft.FieldTypeID)
LEFT OUTER JOIN TableValues tv ON (tf.fieldID = tv.fieldID)
If TableFieldsTypes also does not contain a record for each TableFields record, you should change that join to an outer join as well.
|||Hi,
Please re-read my third point: I want 144 columns, with values for all of them.
For example:
MainID | Column1 | Column2 | .... | Column144
1 | Value 1 | Value 2 | .... | Value 144
2 | Value 1 | Value 2 | .... | Value 144
3 | Value 1 | Value 2 | .... | Value 144
.....
The values are not all the same.
Thanks,
Alexei
Do you really need to get the data like that from SQL? Depending on how you are presenting the data, this method could also work, and it is much simpler than creating a query that gives the results your last post suggests.
Divide it into two queries: one to get the column titles with IDs, and another to get the data sorted by TableValues.MainID, TableValues.FieldID.
This sorting makes sure the data appears in the correct order.
select FieldID, FieldName from TableFields order by FieldID
From this query I would be able to build up the columnheaders
select MainID, FieldID, [Value] from TableValues order by MainID, FieldID
A new MainID creates a new row, and each [Value] is added to the column that has the same FieldID.
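The two-query, build-the-grid-on-the-client approach described above can be sketched as follows. This is just an illustration (Python with an in-memory SQLite database and made-up sample rows; the table and column names are taken from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableFields (FieldID INTEGER, FieldName TEXT);
CREATE TABLE TableValues (MainID INTEGER, FieldID INTEGER, Value TEXT);
INSERT INTO TableFields VALUES (1, 'Height'), (2, 'Width');
INSERT INTO TableValues VALUES (1, 1, '10'), (1, 2, '20'), (2, 1, '30');
""")

# Query 1: column headers, ordered by FieldID.
headers = conn.execute(
    "SELECT FieldID, FieldName FROM TableFields ORDER BY FieldID").fetchall()

# Query 2: data sorted by MainID, FieldID; a new MainID starts a new row,
# and each Value lands in the column with the matching FieldID.
rows = {}
for main_id, field_id, value in conn.execute(
        "SELECT MainID, FieldID, Value FROM TableValues "
        "ORDER BY MainID, FieldID"):
    rows.setdefault(main_id, {})[field_id] = value

table = [[rows[m].get(fid) for fid, _ in headers] for m in sorted(rows)]
print(table)  # one row per MainID, one column per field, None where no value
```

The headers list fixes the column order, so missing values simply stay None in their column.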
|||Hi,
I really need the data like that.
Thanks,
Alexei
Something like this:
SELECT MainID, TypeName, f1, f2, f3, f4, f5, f6, f7, f8, f9, f10, f11, f12, f13, f14
FROM (Select tv.MainID, tft.TypeName, tf.FieldName, tv.[value] AS Value
FROM TableValues tv
LEFT JOIN TableFields tf ON (tf.fieldID = tv.fieldID)
LEFT JOIN TableFieldsTypes tft ON (tf.FieldTypeID = tft.FieldTypeID)) p
PIVOT (AVG(value) for fieldName IN ([f1], [f2], [f3], [f4], [f5], [f6], [f7], [f8], [f9], [f10], [f11], [f12], [f13], [f14]))
AS PVT
ORDER BY MainID
Hi,
I need something like this, but my version of T-SQL doesn't have PIVOT.
Thanks,
Alexei
Use CASE instead.
SELECT MainID, TypeName,
AVG(CASE WHEN FieldName ='f1' THEN value END) as f1,
AVG(CASE WHEN FieldName ='f2' THEN value END) as f2,
AVG(CASE WHEN FieldName ='f3' THEN value END) as f3,
AVG(CASE WHEN FieldName ='f4' THEN value END) as f4,
AVG(CASE WHEN FieldName ='f5' THEN value END) as f5,
AVG(CASE WHEN FieldName ='f6' THEN value END) as f6,
AVG(CASE WHEN FieldName ='f7' THEN value END) as f7,
AVG(CASE WHEN FieldName ='f8' THEN value END) as f8,
AVG(CASE WHEN FieldName ='f9' THEN value END) as f9,
AVG(CASE WHEN FieldName ='f10' THEN value END) as f10,
AVG(CASE WHEN FieldName ='f11' THEN value END) as f11,
AVG(CASE WHEN FieldName ='f12' THEN value END) as f12,
AVG(CASE WHEN FieldName ='f13' THEN value END) as f13,
AVG(CASE WHEN FieldName ='f14' THEN value END) as f14
FROM (Select tv.MainID, tft.TypeName, tf.FieldName, tv.[value] AS Value
FROM TableValues tv
LEFT JOIN TableFields tf ON (tf.fieldID = tv.fieldID)
LEFT JOIN TableFieldsTypes tft ON (tf.FieldTypeID = tft.FieldTypeID)) p
WHERE fieldName IN ('f1', 'f2', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14')
GROUP BY MainID, TypeName
ORDER BY MainID
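For reference, the CASE-based pivot above can be sanity-checked outside SQL Server. A sketch using Python with SQLite and two invented numeric fields (f1, f2); each AVG(CASE ...) collapses to the single value per (MainID, FieldName), which is how CASE emulates PIVOT on engines that lack it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableValues (MainID INTEGER, FieldID INTEGER, value REAL);
CREATE TABLE TableFields (FieldID INTEGER, FieldName TEXT, FieldTypeID INTEGER);
INSERT INTO TableFields VALUES (1, 'f1', 1), (2, 'f2', 1);
INSERT INTO TableValues VALUES (1, 1, 10.0), (1, 2, 20.0), (2, 1, 30.0);
""")

# CASE without ELSE yields NULL, and AVG ignores NULLs, so each column
# picks up only the rows whose FieldName matches.
sql = """
SELECT MainID,
       AVG(CASE WHEN FieldName = 'f1' THEN value END) AS f1,
       AVG(CASE WHEN FieldName = 'f2' THEN value END) AS f2
FROM TableValues tv
JOIN TableFields tf ON tf.FieldID = tv.FieldID
GROUP BY MainID
ORDER BY MainID
"""
result = conn.execute(sql).fetchall()
print(result)  # [(1, 10.0, 20.0), (2, 30.0, None)]
```

A missing field stays NULL (None) in its column rather than dropping the row.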
Hi,
I don't want to calculate the average.
Thanks,
Alexei
Hi,
I have tried this but it doesn't work:
SELECT ElevatorID, FieldType,
AVG(CASE WHEN FieldID =1 THEN FieldValue END) as f1,
AVG(CASE WHEN FieldID =2 THEN FieldValue END) as f2,
AVG(CASE WHEN FieldID =3 THEN FieldValue END) as f3,
AVG(CASE WHEN FieldID =4 THEN FieldValue END) as f4
FROM (Select tv.ElevatorID, tft.FieldType, tf.FieldID, tv.FieldValue
FROM TechnicalPage tv
LEFT JOIN TechnicalPageFields tf ON (tf.fieldID = tv.fieldID)
LEFT JOIN TechnicalPageFieldsTypes tft ON (tf.FieldTypeID = tft.FieldTypeID)) p
WHERE FieldType IN (1,2,3,4)
GROUP BY ElevatorID, FieldType
ORDER BY ElevatorID
I get errors:
Server: Msg 409, Level 16, State 2, Line 1
The average aggregate operation cannot take a char data type as an argument.
Server: Msg 409, Level 16, State 1, Line 1
The average aggregate operation cannot take a char data type as an argument.
Server: Msg 409, Level 16, State 1, Line 1
The average aggregate operation cannot take a char data type as an argument.
Server: Msg 409, Level 16, State 1, Line 1
The average aggregate operation cannot take a char data type as an argument.
Thanks,
Alexei
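The Msg 409 errors occur because AVG cannot aggregate char data. Since there is at most one value per (ElevatorID, FieldID), one way out is to swap AVG for MAX (or MIN), which does accept character data and simply picks the single value. A sketch with SQLite and invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TechnicalPage (ElevatorID INTEGER, FieldID INTEGER,
                            FieldValue TEXT);
INSERT INTO TechnicalPage VALUES (1, 1, 'steel'), (1, 2, '8 persons'),
                                 (2, 1, 'glass');
""")

# MAX works on char data where AVG does not; with at most one value per
# (ElevatorID, FieldID) it simply returns that value.
sql = """
SELECT ElevatorID,
       MAX(CASE WHEN FieldID = 1 THEN FieldValue END) AS f1,
       MAX(CASE WHEN FieldID = 2 THEN FieldValue END) AS f2
FROM TechnicalPage
GROUP BY ElevatorID
ORDER BY ElevatorID
"""
result = conn.execute(sql).fetchall()
print(result)  # [(1, 'steel', '8 persons'), (2, 'glass', None)]
```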
Monday, March 19, 2012
Advanced Sql-Shape Query - Help
This is my basic sql shape query:
----
SHAPE {select * from tbl1}
APPEND({SELECT * FROM tbl2 where field1=1} AS RS2 RELATE field TO
field)
----
With this query I get a RecordSet (RS1) that holds all the records
from table tbl1, and a secondary RecordSet (RS2) that holds all the
records from table tbl2 that match the criterion field1=1.
It is possible that RS2 will be empty (zero records) when there is no
record in tbl2 that matches the criterion.
My wish is to design a query that collects only the records from
tbl1 that have matching records in tbl2, so
that RS2 won't be empty!
I want to influence the main part of the query (RS1) through the
criterion that is used in the secondary query (RS2).
I hope that my question is clear enough. Thanks!

Hi
I have never used SHAPE/OLEDB, but in T-SQL you can do:
SELECT t1.* FROM tbl1 t1 WHERE EXISTS ( SELECT * FROM tbl2 t2 WHERE t1.field
= t2.field )
Therefore you may want to try this as your parent query.
John
"doar123@.gmail.com" wrote:
> Hi,
> This is my basic sql shape query:
> ----
> SHAPE {select * from tbl1}
> APPEND({SELECT * FROM tbl2 where field1=1} AS RS2 RELATE field TO
> field)
> ----
> With this query i get a RecordSet (RS1), who handle all the records
> from table tbl1, and a secondary RecordSet (RS2) who handle all the
> records from table tbl2, who applies to the criteria that field1=1.
> It is possible that RS2 will be empty (zero records) since there is no
> record in tbl2 who applies to that criteria.
> My wish is to design a query, that will collect only the records from
> tbl1, that will have records from tbl2 who applies to the criteria -
> that RS2 won't be empty !
> I want to influence on the main part of the query (RS1), through the
> criteria that is being used in the secondery query (RS2).
> I hope that my question is clear enough. thanks !
|||Thanks, it was helpful!
Advanced Sql-Shape Query - Help
This is my basic sql shape query:
------------------
SHAPE {select * from tbl1}
APPEND({SELECT * FROM tbl2 where field1=1} AS RS2 RELATE field TO
field)
------------------
With this query I get a RecordSet (RS1) that holds all the records
from table tbl1, and a secondary RecordSet (RS2) that holds all the
records from table tbl2 that match the criterion field1=1.
It is possible that RS2 will be empty (zero records) when there is no
record in tbl2 that matches the criterion.
My wish is to design a query that collects only the records from
tbl1 that have matching records in tbl2, so
that RS2 won't be empty!
I want to influence the main part of the query (RS1) through the
criterion that is used in the secondary query (RS2).
I hope that my question is clear enough. Thanks!

(doar123@.gmail.com) writes:
If I get this right you want this in the SHAPE part:
SELECT * FROM tbl1
WHERE EXISTS (SELECT * FROM tbl2 WHERE tbl2.field = tbl1.field)
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||I didn't know I could use the EXISTS query - I will check it out and
report back.
Thanks for your help!|||Well, it works like a charm!!!
Thank you!!!
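The EXISTS rewrite that solved this is easy to verify outside SHAPE. A sketch with SQLite via Python and made-up rows, folding the child-side criterion field1=1 into the subquery so parent rows survive only when their child recordset would be non-empty:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbl1 (field INTEGER);
CREATE TABLE tbl2 (field INTEGER, field1 INTEGER);
INSERT INTO tbl1 VALUES (1), (2), (3);
INSERT INTO tbl2 VALUES (1, 1), (3, 0);  -- only field=1 matches field1=1
""")

# A parent row is kept only when a child row matches both the relation
# and the child-side criterion, so RS2 can never come back empty.
sql = """
SELECT t1.* FROM tbl1 t1
WHERE EXISTS (SELECT * FROM tbl2 t2
              WHERE t2.field = t1.field AND t2.field1 = 1)
"""
result = conn.execute(sql).fetchall()
print(result)  # [(1,)]
```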
Advanced SELECT for a newbie
I have a table full of latitudes, longitudes, addresses, customer names, etc., and I need to take some input (latitude, longitude, range) from the user. So now I have a source lat/long (from the user) and destination lat/longs (rows in the database). I need to compute the distance from the user-given lat/long to every lat/long in the database and check that distance against the range given by the user. If the distance is below the range, I need to put that row into a temp table and return the temp table at the end of the stored proc.
As of right now I am completely lost and need some guidance.
I would also like to be able to add the computed distance to a table. Here is the function and stored procedure i have so far...
ALTER PROCEDURE [dbo].[sp_getDistance]
@srcLat numeric(18,6),
@srcLong numeric(18,6),
@range int
AS
BEGIN
SET NOCOUNT ON;
SELECT * FROM dbo.PL_CustomerGeoCode cg
WHERE dbo.fn_computeDistance(@srcLat, cg.geocodeLat, @srcLong, cg.geocodeLong) < @range
END
CREATE FUNCTION fn_computeDistance
(
-- Add the parameters for the function here
-- (assumed to be in radians; convert degrees first)
@lat1 numeric(18,6),
@lat2 numeric(18,6),
@long1 numeric(18,6),
@long2 numeric(18,6)
)
RETURNS numeric(18,6)
AS
BEGIN
-- Declare the return variable here
DECLARE @dist numeric(18,6)
IF ((@lat1 = @lat2) AND (@long1 = @long2))
SELECT @dist = 0.0
ELSE
-- Clamp the cosine term so rounding error cannot push it past 1.0
IF (((sin(@lat1)*sin(@lat2))+(cos(@lat1)*cos(@lat2)*cos(@long1-@long2)))) > 1.0
SELECT @dist = 3963.1*acos(1.0)
ELSE
SELECT @dist = 3963.1*acos((sin(@lat1)*sin(@lat2))+(cos(@lat1)*cos(@lat2)*cos(@long1-@long2)))
-- Return the result of the function
RETURN @dist
END
Thanks,
Kyle
What's the problem you're facing? If you want the computed distance in the results, note that a computed column cannot reference parameters, so return it as an extra column in the SELECT instead, something like:
SELECT cg.*, dbo.fn_computeDistance(@srcLat, cg.geocodeLat, @srcLong, cg.geocodeLong) AS ComputedDistance FROM dbo.PL_CustomerGeoCode cg
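For what it's worth, the function's clamp guards acos against floating-point drift past 1.0, but the spherical-law-of-cosines formula also assumes the coordinates are in radians, so degree inputs must be converted first. A Python sketch of the same computation (3963.1 is the Earth's radius in miles, as in the T-SQL above; the sample coordinates are invented):

```python
import math

EARTH_RADIUS_MILES = 3963.1

def compute_distance(lat1, long1, lat2, long2):
    """Great-circle distance via the spherical law of cosines.

    Inputs are in degrees; the T-SQL version must likewise convert
    degrees to radians (or store radians) before calling sin/cos.
    """
    lat1, long1, lat2, long2 = map(math.radians, (lat1, long1, lat2, long2))
    c = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(long1 - long2))
    # Clamp to [-1, 1] so floating-point noise cannot break acos.
    c = max(-1.0, min(1.0, c))
    return EARTH_RADIUS_MILES * math.acos(c)

# Roughly New York to Los Angeles, in miles.
print(round(compute_distance(40.7128, -74.0060, 34.0522, -118.2437)))
```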
advanced question - 'xlock' hint on record not locking select * from table
I have a problem with locking on sql server 2000 sp3. I select one record
from a table using the primary key on the where clause, inside a transaction
with serializable level. In another connection, I try to do a simple select
* from sametable (also using serializable transaction level) and this
command runs without locking, when it should wait for the exclusive lock
that is held by another connection to be released.
I wrote two scripts to simulate my problem. The first creates a table,
populates it, starts a connection and selects a record using the xlock hint,
leaving the transaction and the locks open. A second query analyzer must be
opened (also left open) to run the second script. This script does a
simple select * from the table, and it returns all records, including the
record that is xlocked in the other connection.
SCRIPT 1
if exists (select 1 from sysobjects where name = 'bla')
drop table bla
go
create table bla (
id int not null,
nome nvarchar(256) not null,
constraint pk_bla primary key clustered (id),
constraint un_bla_nome unique (nome)
)
go
insert into bla values (1, 'Some thing')
insert into bla values (2, 'Some other thing')
go
set transaction isolation level serializable
begin transaction
select *
from bla (xlock, rowlock)
where id=1
execute sp_lock @@spid
--rollback transaction
SCRIPT 2 (must be run on another query analyzer window)
set transaction isolation level serializable
begin transaction
select *
from bla (holdlock, rowlock)
execute sp_lock @@spid
rollback transaction

Hi Francisco,
In your example:
1. You do not do any data modifications inside your transactions.
(You can add DBCC OPENTRAN before or after your EXECUTE sp_lock to
make sure that there is "No active open transactions." :-0 )
2. You are using ROWLOCK table hint.
3. You have a table with no column not covered by an index.
As a result:
1. Your first script holds an exclusive lock only on a key of the PK (pk_bla).
2. For your second script, the un_bla_nome index's leaf pages have
enough information to produce the desired resultset. The SQL Server optimizer
is smart enough to notice this, and it chooses to scan the un_bla_nome index.
(See the execution plan for your second script.)
So these two transactions (scripts) do not interfere, and you are getting
results without any delay.
If you want your second script to wait till the first finishes, you could:
- In your first script, replace ROWLOCK hint with TABLOCKX.
- OR add column to the table, not covered by the indexes.
- OR drop un_bla_nome unique constraint.
- OR do some data modification (For example: UPDATE bla SET id = id in
your second script)
Hope this helps,
Igor Raytsin
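Igor's covering-index point is easy to confirm on any engine that shows query plans. A small illustration (Python with SQLite, a table mirroring bla; here, too, the optimizer answers an indexed-column query from the index alone):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bla (id INTEGER PRIMARY KEY, nome TEXT NOT NULL);
CREATE UNIQUE INDEX un_bla_nome ON bla (nome);
INSERT INTO bla VALUES (1, 'Some thing'), (2, 'Some other thing');
""")

# The un_bla_nome leaf pages hold everything this query needs, so the
# plan reads the index and never touches the base table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT nome FROM bla ORDER BY nome").fetchall()
print(plan)
```

The plan names un_bla_nome as a covering index, which is the same effect that let the second script bypass the row lock held on the clustered PK.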
"Francisco D. Kurpiel" <francisco.kurpiel@.westbest.com.br> wrote in message
news:O8ShXmW$DHA.3668@.TK2MSFTNGP09.phx.gbl...
> I have a problem with locking on sql server 2000 sp3. I select one record
> from a table using the primary key on the where clause, inside a
transaction
> with serializable level. In another connection, I try to do a simple
select
> * from sametable (also using serializable transaction level) and this
> command runs without locking, when it should wait for the exclusive lock
> that is held by another connection to be released.
>
> I wrote two scripts to simulate my problem. The first creates a table,
> populate it, starts a connection and selects a record using the xlock hint
> leaving the transaction and the locks open. Another query analyzer must be
> opened (leaving this also open) to run the second script. This script do a
> simple select * from tbl. This script returns all records, including the
> records that is xlocked on the another connection.
>
> SCRIPT 1
> if exists (select 1 from sysobjects where name = 'bla')
> drop table bla
> go
>
> create table bla (
> id int not null,
> nome nvarchar(256) not null,
> constraint pk_bla primary key clustered (id),
> constraint un_bla_nome unique (nome)
> )
> go
>
> insert into bla values (1, 'Some thing')
> insert into bla values (2, 'Some other thing')
> go
>
> set transaction isolation level serializable
>
> begin transaction
>
> select *
> from bla (xlock, rowlock)
> where id=1
>
> execute sp_lock @.@.spid
>
> --rollback transaction
>
> SCRIPT 2 (must be run on another query analyzer window)
>
> set transaction isolation level serializable
>
> begin transaction
>
> select *
> from bla (holdlock, rowlock)
>
> execute sp_lock @.@.spid
>
> rollback transaction
>
>
>|||I appreciate your help.
Your message helped me to understand my problem: the first query locks one
index, and the second query reads data using another index. I have made some
additional tests forcing the second query to use the same index
that is used by the first query, and the locking happened as expected.
This behavior appears very strange to me. If some query locks the
table using some index, another query will ignore those locks if the
optimizer chooses another index? We know that the query plan depends on
the size of the tables involved in the query. So, when my database grows, the
optimizer may select other indexes to run the query, changing the
concurrency behavior? This seems more like a bug than a programmer
responsibility.
I have analyzed your suggestions:
* Replace ROWLOCK hint with TABLOCKX
My database will get big and will be accessed by a lot of users. I cannot
use this solution.
* Add a column to the table
My production tables have additional columns. But I have this locking
problem because I have indexes on some tables with specific objectives. I
have a where clause in some proc that will use one index and needs to lock
resources, and other queries that will access these tables using other
indexes. I could force all procs to use the same index with an
'index=something' hint. But indexes are only useful if they are
used (sorry for the obvious statement). And if I use more than one index, SQL
Server will not honor my locks?
* Drop the un_bla_nome unique constraint
As I explained in the last item, this is not applicable.
* UPDATE bla SET id = id
Hum... okay, you win. This works; it xlocked all indexes and added an
intent lock on the page. But this is, you know, a 'workaround'.
I also tried to replace my rowlocks with paglocks, but the first script locks
one page and the second locks another page. I'm very confused.
Do you have a link to documentation that explains this entire thing?
Thank you.
"Igor Raytsin" <n&i@.cyberus.ca> wrote in message
news:4042b47f_1@.news.cybersurf.net...
> Hi Francisco,
> In your example:
> 1. You do not do any data modifications inside your transactions.
> (You can add DBCC OPENTRAN before or after your EXECUTE sp_lock to
> make sure that there is "No active open transactions." :-0 )
> 2. You are using ROWLOCK table hint.
> 3. You have a table with no column not covered by an index.
> As result:
> 1. Your first script has exclusive lock only on a part of the PK (pk_bla)
> 2. For your second script, un_bla_nome index's leaf pages have
> enough information to produce the desired resultset. SQL Server Optimizer
> is smart
> enough to noticed it and it chose to scan un_bla_nome index.
> (See execution plan for your second script).
> So these two transaction (scripts) do not interfere and you are getting
> result without any delay.
>
> If you want your second script to wait till first will finish you could:
> - In your first script, replace ROWLOCK hint with TABLOCKX.
> - OR add column to the table, not covered by the indexes.
> - OR drop un_bla_nome unique constraint.
> - OR do some data modification (For example: UPDATE bla SET id = id in
> your second script)
>
> Hope this helps,
> Igor Raytsin
>
> "Francisco D. Kurpiel" <francisco.kurpiel@.westbest.com.br> wrote in
message
> news:O8ShXmW$DHA.3668@.TK2MSFTNGP09.phx.gbl...
> > I have a problem with locking on sql server 2000 sp3. I select one
record
> > from a table using the primary key on the where clause, inside a
> transaction
> > with serializable level. In another connection, I try to do a simple
> select
> > * from sametable (also using serializable transaction level) and this
> > command runs without locking, when it should wait for the exclusive lock
> > that is held by another connection to be released.
> >
> >
> >
> > I wrote two scripts to simulate my problem. The first creates a table,
> > populate it, starts a connection and selects a record using the xlock
hint
> > leaving the transaction and the locks open. Another query analyzer must
be
> > opened (leaving this also open) to run the second script. This script do
a
> > simple select * from tbl. This script returns all records, including the
> > records that is xlocked on the another connection.
> >
> >
> >
> > SCRIPT 1
> >
> > if exists (select 1 from sysobjects where name = 'bla')
> >
> > drop table bla
> >
> > go
> >
> >
> >
> > create table bla (
> >
> > id int not null,
> >
> > nome nvarchar(256) not null,
> >
> > constraint pk_bla primary key clustered (id),
> >
> > constraint un_bla_nome unique (nome)
> >
> > )
> >
> > go
> >
> >
> >
> > insert into bla values (1, 'Some thing')
> >
> > insert into bla values (2, 'Some other thing')
> >
> > go
> >
> >
> >
> > set transaction isolation level serializable
> >
> >
> >
> > begin transaction
> >
> >
> >
> > select *
> >
> > from bla (xlock, rowlock)
> >
> > where id=1
> >
> >
> >
> > execute sp_lock @.@.spid
> >
> >
> >
> > --rollback transaction
> >
> >
> >
> > SCRIPT 2 (must be run on another query analyzer window)
> >
> >
> >
> > set transaction isolation level serializable
> >
> >
> >
> > begin transaction
> >
> >
> >
> > select *
> >
> > from bla (holdlock, rowlock)
> >
> >
> >
> > execute sp_lock @.@.spid
> >
> >
> >
> > rollback transaction
> >
> >
> >
> >
> >
> >
|||Your scripts only simulate your problem. They do not conflict with each
other, and you don't like it, right?
Can you tell us what you want to achieve? There could be another solution
already.
There is another 'workaround'. You could do this in your first script :
SELECT *
FROM bla (xlock, rowlock)
WHERE nome = (Select nome from bla where id=1)
But yes, the SQL Server optimizer might choose a different execution plan when your
database grows.
I would recommend you to read BOL - 'Locking' section.
From 'Key-Range Locking' topic:
"Serializability requires that any query executed during a transaction must
obtain the same set of
rows if it is executed again at some later point within the same
transaction."
That is what is happening with your scripts.
Igor Raytsin
"Francisco D. Kurpiel" <francisco.kurpiel@.westbest.com.br> wrote in message
news:%23K5S9$5$DHA.392@.TK2MSFTNGP12.phx.gbl...
> I appreciate your help.
> Your message helps me to understand my problem. The first query locks one
> index and the second query data using another index. I have made some
> additional tests forcing the index on the second query to use the same
index
> witch is used by the first query and the locking happened as expected.
> This behavior appears to be very strange to me. If some query locks the
> table using some index, another query will ignore these locks if the
> optimizer chooses another index? We know that the query plan is dependent
on
> the size of the tables evolved on the query. So, when my database grows,
the
> query analyzer may select other indexes to run the query, changing the
> concurrency behavior? This is more like a bug than a programmer
> responsibility.
> I have analyzed your suggestions:
> * Replace ROWLOCK hint with TABLOCKX
> My database will get big and will be accessed by a lot of users. I cannot
> use this solution.
> * Add column to the table
> My production tables have additional columns. But I have this locking
> problem because I have indexes on some tables with specific objectives. I
> have a where clause on some proc that will use one index and needs to lock
> resources, and other queries that will access this tables using other
> indexes. May I force all procs to use the same index using a
> ?index=something? hint. But indexes only will be useful they if they will
be
> used (sorry for the obvious statement). And if I use more than one index
SQL
> Server will not honor my locks?
> * Drop un_bla_nome unique constraint
> As I explained on last item, this is not applicable.
> * UPDATE bla SET id = id
> Hum? okay, you win. This works, it have xlocked all indexes and add an
> intent lock on the page. But this is like a? you know, a ?workaround?.
> I also tried to replace my rowlocks by paglocks, by the first script locks
> one page and the second locks other page? I?m very confused.
> Do you have some link to documentation that explains this entire think?
> Thank you.
>
|||Hi, Arno.
The scripts that I sent to the list are just examples to illustrate a
problem in a database that I'm creating for some app. I'm writing procedures
that will access some critical data. Consistency is crucial for this data,
because the data *is* money in some tables. I'm trying to use updlock to
avoid conversion deadlocks, because I don't have a retry mechanism. I didn't
send the complete picture to the list so as not to seem arrogant. Sorry... :)
I have a paranoid contract that doesn't allow me to reveal business rules,
so I have to illustrate. Let's say I have a table that controls the balance
of some clients. This will be a big table. I have two processes that run in
parallel:
First: Verifies that the sum of the balances is big enough to complete
the process. To complete the process I decrease the balance of the clients
according to some rule. In the example below I subtract a fixed amount from
all clients. The clients are located using a field that is not in the
primary key, so an index was created with the cash of the client to be used
as a covering index (you will say: a-ha!). This is a query-update process,
so I have to use updlock on the query to avoid conversion deadlocks.
Second: This is an independent process. This process may change the cash of
the client based on his id. This is also a query-update process, and it also
needs updlock on the query to avoid conversion deadlocks.
The problem starts when I notice that SQL Server is not honoring the
updlocks, allowing both processes to acquire this lock on the same record. In
the example the client with id 6 is locked by two connections with updlocks.
This is supposed to be impossible.
Again, I don't want to seem arrogant, but I understand range locking, at
least the aspects that are in the article you sent me. But this problem is
different: two selects with updlock on the same table cannot run at the same
time in separate transactions. Why do they run?
I hope this helps you understand my problem. Thank you again, and sorry for
my poor English.
/************** SCRIPT 1 **************/
-- CREATING AND POPULATING TABLE
set nocount on
go
while @@trancount>0
rollback transaction
go
if exists (select 1 from sysobjects where name='clientbalance')
drop table clientbalance
go
create table clientbalance (
clientid bigint not null,
buyprocess int,
cash money not null,
anythingelse nvarchar(500) default N'not in any key',
constraint pk_clientcash primary key clustered (clientid)
)
-- TO BE USED AS A COVERING INDEX ABOVE
create index ix_buyprocess on clientbalance (buyprocess, cash)
go
insert into clientbalance values (1, 1, 1000, default)
insert into clientbalance values (2, 1, 1000, default)
insert into clientbalance values (3, 1, 1000, default)
insert into clientbalance values (4, 1, 1000, default)
insert into clientbalance values (5, 5, 1000, default)
insert into clientbalance values (6, 5, 1000, default)
insert into clientbalance values (7, 5, 1000, default)
insert into clientbalance values (8, 5, 1000, default)
insert into clientbalance values (9, 1, 1000, default)
insert into clientbalance values (10, 1, 1000, default)
insert into clientbalance values (11, 1, 1000, default)
insert into clientbalance values (12, 1, 1000, default)
insert into clientbalance values (13, 1, 1000, default)
insert into clientbalance values (14, 1, 1000, default)
go
set transaction isolation level serializable
go
-- OKAY, BEGINNING THE PROCESS
begin transaction
declare @buyprocess int -- PROC PARAMETER
declare @procamount money -- PROC PARAMETER
declare @availableamount money -- select sum() result
set @procamount = 5000.00
set @buyprocess = 5
-- CALCULATING AVAILABLE AMOUNT using update lock
set @availableamount = (select sum(cash)
from clientbalance (updlock)
where buyprocess = @buyprocess)
declare @dbname nvarchar(30)
set @dbname = db_name()
execute sp_lock2 @dbname, @@spid
-- IF MONEY IS OKAY TO PROCESS
-- on this example the test is false
if @availableamount >= @procamount begin
-- on this example this will never run
print 'cash okay.'
update clientbalance
set cash = cash - 1000 -- JUST AS EXAMPLE
where buyprocess = @buyprocess
end else begin
print 'no money, no buying.'
-- rollback
-- procedure rollback things
end
-- rollback transaction
-- KEEP TRANSACTION OPEN AND RUN THE SECOND SCRIPT.
-- IT IS SUPPOSED TO BLOCK BECAUSE OF CLIENTID 6,
-- WHICH HAS AN UPDLOCK HELD BY THIS TRANSACTION
/************** SCRIPT 2 **************/
set transaction isolation level serializable
begin transaction
-- SELECTING DATA FROM TABLE USING OTHER INDEX
select *
from clientbalance (updlock) -- MUST LOCK HERE
where clientid=6
declare @dbname nvarchar(30)
set @dbname=db_name()
execute sp_lock2 @dbname, @@spid
-- BASED ON THE RESULT OF THIS SELECT THE PROCEDURE
-- WILL CHANGE THE CASH OF THE CLIENT.
-- ...
rollback transaction|||Hello Francisco,
Let me express my thoughts about this situation. I still think that the
problem lies on the logical level.
1. Do you have a business rule that says: "You have to have these two
processes running in parallel."?
Or it says: "You should have two independent processes."?
Are they really independent or they 'quasi-independent'?
If they are 'quasi-independent' or have to run in parallel, then you need to
implement some
kind of messaging system to allow the processes to collaborate with each
other.
So, one could say to other: "You stay, I go." and then "You can go now."
Maybe you need to create an extra table and put some flags in it.
2. If these processes are truly independent (which they are now),
what would happen if the second process started before the first one?
Will you get correct results? Is it allowable? Will it break any business
rule?
It seems like you want the second process to wait till the first finishes its
part. Then they are 'quasi-independent'. See the note above.
3. Technical part.
Quote from BOL: "Only one transaction can obtain an update (U) lock to a
resource at a time."
And nothing says: "Two selects with updlock on the same table cannot run at the same
time in separate transactions."
That is what is happening: one process gets an Update lock on ix_buyprocess
and another gets an Update lock on pk_clientcash.
If you want, you could try this:
FROM clientbalance (UPDLOCK, TABLOCK)
4. Another quote from BOL: " Keep the transaction as short as possible."
In the first script I would do this:
UPDATE clientbalance
SET cash = cash - 1000
WHERE buyprocess = @.buyprocess
AND ( SELECT sum(cash) from clientbalance where buyprocess = @.buyprocess)
>= @.procamount
5. My English is not excellent either :)
Hope it helps,
Igor Raytsin
P.S. And who is Arno?
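To make point 3 concrete, here is a sketch of the two hint variants against the clientbalance table from the scripts in this thread (the table-level variant is the TABLOCK suggestion above; whether its blocking cost is acceptable depends on the workload):

```sql
-- With UPDLOCK alone, each session takes its U locks on the keys of
-- whichever index its own plan touches (ix_buyprocess for the covering
-- scan, pk_clientcash for the clustered seek), so the two U locks sit
-- on different resources and never conflict with each other.
select sum(cash)
from clientbalance (updlock)
where buyprocess = 5

-- Adding TABLOCK raises the U lock to the table level. Any second
-- session that also asks for (updlock, tablock) now blocks until this
-- transaction ends, which serializes the two processes.
select sum(cash)
from clientbalance (updlock, tablock)
where buyprocess = 5
```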
"Francisco D. Kurpiel" <francisco.kurpiel@westbest.com.br> wrote in message
news:OkgBlwJAEHA.220@TK2MSFTNGP09.phx.gbl...
|||Sorry, I've made a mistake... :)
Well, both processes are online and user-connected. The app was not
designed to say 'your request is being processed'. The application will be
accessed by many users at a time. These 'processes' are resources of the
application. I have to guarantee that they will not affect each other. I'm the
DBA. I cannot say: 'you have to change the application flow because of
the SQL Server updlock implementation'.
You're right to say that my updlocks are locking different resources. Now it is
clear to me that SQL Server's select locking strategy is intended to guarantee
some ACID properties, disallowing, for example, phantom reads in
serializable mode. And the major use of updlock is to prevent deadlocks among
users that are running the same process on the database (conversion deadlock).
If SQL Server chose the same plan for all users, the updlock would
lock the same index and the users would be correctly serialized, avoiding
deadlocks. But SQL Server does not guarantee that the same procedure will use
the same index, so the updlock may not be honored. And if I have two independent
processes, using different indexes, that must be serialized with updlock? Do I
have to lock the entire table? And what if the processes may run many times a
second, while other users are trying to insert and update data? This whole
thing because updlock does not lock the primary key when using a covering index?
The BOL says: "UPDLOCK: Use update locks instead of shared locks while
reading a table, and hold locks until the end of the statement or
transaction. UPDLOCK has the advantage of allowing you to read data (without
blocking other readers) and update it later with the assurance that the data
has not changed since you last read it". There's nothing about each covering
index being locked independently. Because of this I got confused.
Now I understand what is happening. I will stop complaining and rewrite my
queries to produce the correct behavior. I'm very grateful for your help and
your patience ignoring my brutality against the English language. I hope to
have the opportunity to return the help some day.
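The per-index behavior described above can be observed directly: the IndId column in sp_lock output identifies which index each KEY lock belongs to (a reading sketch using the standard sp_lock; the sp_lock2 used in the scripts is a variant of it):

```sql
-- Run inside either open transaction. In the output, KEY locks with
-- IndId = 1 belong to the clustered index (pk_clientcash here), while
-- IndId values above 1 belong to nonclustered indexes such as
-- ix_buyprocess. Two sessions each holding a U lock under a different
-- IndId is exactly the "updlock not honored" situation discussed above.
execute sp_lock @@spid
```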
"Igor Raytsin" <n&i@cyberus.ca> wrote in message
news:40457b40_1@news.cybersurf.net...
|||Hi Francisco,
Neither "set transaction isolation level serializable" nor UPDLOCK
prevents other transactions from reading the data your transaction
has just read. Only an exclusive lock prevents access to the resource.
If you do not want to lock the whole table, you need to create your own
'external' locking mechanism.
For example, in your script add this table:
CREATE TABLE my_locks (
id int not null,
table_name sysname not null,
spid int null)
GO
INSERT INTO my_locks VALUES (1, 'clientbalance', NULL)
Then in both of your processes do this:
set transaction isolation level serializable
go
begin transaction
UPDATE my_locks
SET spid = @@spid
WHERE table_name = 'clientbalance'
...
commit
Hope this helps,
Igor Raytsin
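The flag-row idea can be completed like this (a sketch built on the my_locks table above; the exclusive lock taken by the UPDATE is what serializes the two processes):

```sql
-- Both processes run this same block. Whichever UPDATE executes first
-- holds an exclusive lock on the flag row until COMMIT, so the other
-- process blocks on its own UPDATE and the two critical sections
-- never overlap.
begin transaction

update my_locks
set spid = @@spid
where table_name = 'clientbalance'  -- the row acts as a mutex

-- ... the original updlock select and the following update go here ...

commit transaction  -- releases the row lock; the waiting process continues
```

SQL Server 2000 also ships sp_getapplock, which provides a similar application-level mutex without an extra table; whether it fits here depends on the lock-owner and timeout semantics the processes need.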
"Francisco D. Kurpiel" <francisco.kurpiel@westbest.com.br> wrote in message
news:%23CRU$4SAEHA.212@TK2MSFTNGP12.phx.gbl...
|||> Neither "set transaction isolation level serializable" nor UPDLOCK
> prevents other transactions from reading the data your transaction
> has just read. Only an exclusive lock prevents access to the resource.
Exactly! The serializable isolation level gives me consistency (if the access
path is also consistent), not exclusivity. But it may generate deadlocks. I
choose updlock and xlock only when the default locking may cause deadlocks.
I'll explain what made me post the first text to this list. I want to:
* Use shared locks to avoid phantom reads. If I read some data from the
client I need this data untouched until the end of my transaction. This
behavior is not an unnecessary luxury. We can simulate situations with our
business rules that will return inconsistent data to the client if phantom
reads are possible, even if we do not requery the same data. I will not put
an example in this post because you know locking very well :).
* Use updlock to avoid conversion deadlocks. All the documentation
about conversion deadlocks that I remember says that I need to use updlock,
because: 1. it's compatible with shared locks; 2. it's not compatible with
other updlocks, serializing other occurrences of the same process, avoiding
conversion deadlocks; 3. it's not compatible with exclusive locks.
* Use exclusive locks in some specific circumstances. For example, when I
want to avoid cycle deadlocks. The recommended way to avoid this kind of
deadlock is to access the data in a defined order. The order I have chosen is
not natural for some operations. In some cases the only way to avoid
deadlocks is to xlock resources at the top of the locking path and then make
the desired operation.
My work now is to write some stored procedures that just return consistent
data, update data in a consistent way, and do not produce deadlocks.
My team has heard beautiful stories about the unique features of the SQL
Server locking mechanism, which should help to achieve this behavior. The goal
of updlock is to avoid conversion deadlocks, but Microsoft's
implementation of updlock does not guarantee the required serialization in
some circumstances. I have to use some scary tricks to make SQL Server
do what I want it to do.
That's my first disappointment with SQL Server. My trust that I can reach my
goal remains intact. I hope that some day Microsoft will change the locking
query hints to lock the used index *and* some fixed index (or rid, on heaps) to
really guarantee, for example, the exclusive nature of an xlock hint.
This discussion has become more philosophical and less technical, and that's
the designer's work. For the moment, I just need to understand how it works,
and make it work as well as possible. Some day I will be responsible for
designing apps at my company. And I will require that any technical promise
become reality, as I'm requiring from SQL Server a way to avoid conversion
deadlocks :).
Thank you again, and good luck with your projects.
advanced question - 'xlock' hint on record not locking
I have a problem with locking on SQL Server 2000 SP3. I select one record
from a table using the primary key in the where clause, inside a transaction
at the serializable level. In another connection, I try to do a simple select
* from the same table (also using the serializable transaction level) and this
command runs without blocking, when it should wait for the exclusive lock
that is held by the other connection to be released.
I wrote two scripts to simulate my problem. The first creates a table,
populates it, starts a transaction and selects a record using the xlock hint,
leaving the transaction and the locks open. A second Query Analyzer window
must be opened (and also left open) to run the second script. That script does
a simple select * from the table, and it returns all records, including the
record that is xlocked by the other connection.
SCRIPT 1
if exists (select 1 from sysobjects where name = 'bla')
drop table bla
go
create table bla (
id int not null,
nome nvarchar(256) not null,
constraint pk_bla primary key clustered (id),
constraint un_bla_nome unique (nome)
)
go
insert into bla values (1, 'Some thing')
insert into bla values (2, 'Some other thing')
go
set transaction isolation level serializable
begin transaction
select *
from bla (xlock, rowlock)
where id = 1
execute sp_lock @@spid
--rollback transaction
SCRIPT 2 (must be run in another Query Analyzer window)
set transaction isolation level serializable
begin transaction
select *
from bla (holdlock, rowlock)
execute sp_lock @@spid
rollback transaction
|||Hi Francisco,
In your example:
1. You do not make any data modifications inside your transactions.
(You can add DBCC OPENTRAN before or after your EXECUTE sp_lock to
see that it reports "No active open transactions." :-0 )
2. You are using the ROWLOCK table hint.
3. Your table has no column that is not covered by an index.
As a result:
1. Your first script holds its exclusive lock only on a part of the PK (pk_bla).
2. For your second script, the un_bla_nome index's leaf pages have
enough information to produce the desired resultset. The SQL Server optimizer
is smart enough to notice this, and it chose to scan the un_bla_nome index.
(See the execution plan for your second script.)
So these two transactions (scripts) do not interfere and you get the
result without any delay.
If you want your second script to wait until the first finishes, you could:
- In your first script, replace the ROWLOCK hint with TABLOCKX.
- OR add a column to the table that is not covered by the indexes.
- OR drop the un_bla_nome unique constraint.
- OR do some data modification (for example: UPDATE bla SET id = id in
your second script).
Hope this helps,
Igor Raytsin
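The last remedy can be dropped straight into script 2 (a sketch using the bla table from the scripts; the exact locks taken may vary with the plan):

```sql
-- A data modification, even one that rewrites the same value, must take
-- exclusive row locks in every index it touches, so it queues behind
-- the XLOCK held by script 1 instead of being satisfied from the
-- un_bla_nome index alone.
set transaction isolation level serializable
begin transaction

update bla set id = id where id = 1  -- blocks until script 1 ends

select * from bla (holdlock, rowlock)

rollback transaction
```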
"Francisco D. Kurpiel" <francisco.kurpiel@westbest.com.br> wrote in message
news:O8ShXmW$DHA.3668@TK2MSFTNGP09.phx.gbl...
|||I appreciate your help.
Your message helped me to understand my problem. The first query locks one
index and the second query reads data using another index. I have made some
additional tests forcing the second query to use the same index
which is used by the first query, and the locking happened as expected.
This behavior appears very strange to me. If some query locks the
table using some index, another query will ignore these locks if the
optimizer chooses another index? We know that the query plan depends on
the size of the tables involved in the query. So, when my database grows, the
optimizer may select other indexes to run the query, changing the
concurrency behavior? This is more like a bug than a programmer
responsibility.
I have analyzed your suggestions:
* Replace the ROWLOCK hint with TABLOCKX
My database will get big and will be accessed by a lot of users. I cannot
use this solution.
* Add a column to the table
My production tables have additional columns. But I have this locking
problem because I have indexes on some tables with specific objectives. I
have a where clause in some proc that will use one index and needs to lock
resources, and other queries that will access these tables using other
indexes. I could force all procs to use the same index with an
index=something hint. But indexes will only be useful if they are
used (sorry for the obvious statement), and if I use more than one index, SQL
Server will not honor my locks.
* Drop the un_bla_nome unique constraint
As I explained in the last item, this is not applicable.
* UPDATE bla SET id = id
Hum, okay, you win. This works; it has xlocked all indexes and added an
intent lock on the page. But this is, like, you know, a workaround.
I also tried to replace my rowlocks with paglocks, but the first script locks
one page and the second locks another page. I'm very confused.
Do you have some link to documentation that explains this entire thing?
Thank you.
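The index=something hint mentioned above would look like this in script 2 (a sketch against the bla table from the scripts; forcing the plan is exactly what ties the concurrency behavior to the hint staying in place):

```sql
-- Forcing the scan onto the clustered PK makes script 2 request its
-- shared locks on the same index whose key script 1 has xlocked, so it
-- now blocks as originally expected.
select *
from bla (index(pk_bla), holdlock, rowlock)
```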
"Igor Raytsin" <n&i@cyberus.ca> wrote in message
news:4042b47f_1@news.cybersurf.net...
|||Your scripts only simulate your problem. They do not conflict with each
other, and you don't like it, right?
Can you tell us what you want to achieve? There could be another solution
already.
There is another 'workaround'. You could do this in your first script:
SELECT *
FROM bla (xlock, rowlock)
WHERE nome = (SELECT nome FROM bla WHERE id = 1)
But yes, the SQL Server optimizer might choose a different execution plan when
your database grows.
I would recommend you read the BOL 'Locking' section.
From the 'Key-Range Locking' topic:
"Serializability requires that any query executed during a transaction must
obtain the same set of rows if it is executed again at some later point
within the same transaction."
That is what is happening with your scripts.
Igor Raytsin
"Francisco D. Kurpiel" <francisco.kurpiel@westbest.com.br> wrote in message
news:%23K5S9$5$DHA.392@TK2MSFTNGP12.phx.gbl...
>|||Hi, Arno.
The scripts that I sent to the list are just examples to illustrate a
problem in a database that I'm creating for some app. I'm writing procedures
that will access some critical data. Consistency is crucial for this data,
because the data *is* money in some tables. I'm trying to use updlock to
avoid conversion deadlocks, because I don't have a retry mechanism. I didn't
send the complete panorama to the list so as not to seem arrogant. Sorry...
I have a paranoid contract that doesn't allow me to reveal business rules,
so I have to illustrate. Let's say I have a table that controls the balance
of some clients. This will be a big table. I have two processes that run in
parallel:
First: Verifies if the sum of the balances is big enough to complete
the process. To complete the process I decrease the balance of the clients
according to some rule. In the example I subtract a fixed amount from
all clients. The clients are located using a field that is not in the
primary key. So an index was created with the cash of the client to be used
as a covering index (you will say: a-ha!). This is a query-update process,
so I have to use updlock on the query to avoid a conversion deadlock.
Second: This is an independent process. This process may change the cash of
the client based on his id. This is also a query-update process, and it also
needs the updlock on the query to avoid a conversion deadlock.
The problem starts when I notice that SQL Server is not honoring the
updlocks, allowing both processes to acquire this lock on the same record. In
the example the client with id 6 is locked by two connections with updlocks.
This is supposed to be impossible.
Again, I don't want to seem arrogant, but I understand range locking. At
least the aspects that are in the article you sent me. But this problem is
different: two selects with updlock on the same table cannot run at the same
time in separate transactions. Why do they run?
I hope this helps you to understand my problem. Thank you again, and sorry for
my poor English.
/************** SCRIPT 1 **************/
-- CREATING AND POPULATING TABLE
set nocount on
go
while @@trancount > 0
    rollback transaction
go
if exists (select 1 from sysobjects where name = 'clientbalance')
    drop table clientbalance
go
create table clientbalance (
    clientid bigint not null,
    buyprocess int,
    cash money not null,
    anythingelse nvarchar(500) default N'not in any key',
    constraint pk_clientcash primary key clustered (clientid)
)
-- TO BE USED AS A COVERING INDEX BELOW
create index ix_buyprocess on clientbalance (buyprocess, cash)
go
insert into clientbalance values (1, 1, 1000, default)
insert into clientbalance values (2, 1, 1000, default)
insert into clientbalance values (3, 1, 1000, default)
insert into clientbalance values (4, 1, 1000, default)
insert into clientbalance values (5, 5, 1000, default)
insert into clientbalance values (6, 5, 1000, default)
insert into clientbalance values (7, 5, 1000, default)
insert into clientbalance values (8, 5, 1000, default)
insert into clientbalance values (9, 1, 1000, default)
insert into clientbalance values (10, 1, 1000, default)
insert into clientbalance values (11, 1, 1000, default)
insert into clientbalance values (12, 1, 1000, default)
insert into clientbalance values (13, 1, 1000, default)
insert into clientbalance values (14, 1, 1000, default)
go
set transaction isolation level serializable
go
-- OKAY, BEGINNING THE PROCESS
begin transaction
declare @buyprocess int        -- PROC PARAMETER
declare @procamount money      -- PROC PARAMETER
declare @availableamount money -- select sum() result
set @procamount = 5000.00
set @buyprocess = 5
-- CALCULATING AVAILABLE AMOUNT using update lock
set @availableamount = (select sum(cash)
                        from clientbalance (updlock)
                        where buyprocess = @buyprocess)
declare @dbname nvarchar(30)
set @dbname = db_name()
execute sp_lock2 @dbname, @@spid
-- IF MONEY IS OKAY TO PROCESS
-- (in this example the test is false)
if @availableamount >= @procamount begin
    -- in this example this will never run
    print 'cash okay.'
    update clientbalance
    set cash = cash - 1000 -- JUST AS EXAMPLE
    where buyprocess = @buyprocess
end else begin
    print 'no money, no buying.'
    -- rollback
    -- procedure rollback things
end
-- rollback transaction
-- KEEP THE TRANSACTION OPEN AND RUN THE SECOND SCRIPT.
-- IT IS SUPPOSED TO BLOCK BECAUSE OF CLIENTID 6,
-- WHICH HAS AN UPDLOCK HELD BY THIS TRANSACTION
/************** SCRIPT 2 **************/
set transaction isolation level serializable
begin transaction
-- SELECTING DATA FROM THE TABLE USING ANOTHER INDEX
select *
from clientbalance (updlock) -- MUST LOCK HERE
where clientid = 6
declare @dbname nvarchar(30)
set @dbname = db_name()
execute sp_lock2 @dbname, @@spid
-- BASED ON THE RESULT OF THIS SELECT THE PROCEDURE
-- WILL CHANGE THE CASH OF THE CLIENT.
-- ...
rollback transaction
|||Hello Francisco,
Let me express my thoughts about this situation. I still think that the
problem lies on the logical level.
1. Do you have a business rule that says: "You have to have these two
processes running in parallel."?
Or it says: "You should have two independent processes."?
Are they really independent, or are they 'quasi-independent'?
If they are 'quasi-independent' or have to run in parallel, then you need
to implement some kind of messaging system to allow the processes to
collaborate with each other.
So, one could say to the other: "You stay, I go." and then "You can go now."
Maybe you need to create an extra table and put some flags in it.
2. If these processes are truly independent (which they are now),
what would happen if the second process started before the first one?
Will you get correct results? Is it allowable? Will it break any business
rule?
It seems like you want the second process to wait till the first finishes
its part.
Then they are 'quasi-independent'. See the note above.
3. Technical part.
Quote from BOL: "Only one transaction can obtain an update (U) lock to a
resource at a time."
And no one says: "Two selects with updlock on the same table cannot run at
the same time in separate transactions."
That's what is happening. One process gets an Update lock on ix_buyprocess
and another gets Update lock on pk_clientcash.
If you want, you could try this:
FROM clientbalance (UPDLOCK, TABLOCK)
4. Another quote from BOL: " Keep the transaction as short as possible."
In the first script I would do this:
UPDATE clientbalance
SET cash = cash - 1000
WHERE buyprocess = @buyprocess
  AND (SELECT sum(cash) FROM clientbalance
       WHERE buyprocess = @buyprocess) >= @procamount
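The guarded single-statement form above can be tried outside SQL Server. Below is a minimal sketch using SQLite through Python's sqlite3 module, with sample figures echoing the scripts in this thread; SQLite knows nothing of UPDLOCK, so this only illustrates the guard built into one statement, not the locking behavior.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE clientbalance (
                    clientid   INTEGER PRIMARY KEY,
                    buyprocess INTEGER,
                    cash       REAL NOT NULL)""")
conn.executemany("INSERT INTO clientbalance VALUES (?, ?, ?)",
                 [(5, 5, 1000), (6, 5, 1000), (7, 5, 1000), (8, 5, 1000)])

# Guarded single-statement update: rows are touched only when the
# group's total balance covers the requested amount.
guarded_update = """
    UPDATE clientbalance
       SET cash = cash - 1000
     WHERE buyprocess = :bp
       AND (SELECT SUM(cash) FROM clientbalance
             WHERE buyprocess = :bp) >= :amount
"""

# Total is 4000 but 4500 is requested: the guard fails, nothing changes.
denied = conn.execute(guarded_update, {"bp": 5, "amount": 4500}).rowcount

# A request the balance trivially covers: all four rows are debited.
granted = conn.execute(guarded_update, {"bp": 5, "amount": 0}).rowcount

print(denied, granted)   # 0 4
```

Collapsing the check and the change into one statement also removes the window between the SELECT and the UPDATE that the updlock was meant to protect.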
5. My English is not excellent either
Hope it helps,
Igor Raytsin
P.S. And who is Arno?
"Francisco D. Kurpiel" <francisco.kurpiel@.westbest.com.br> wrote in message
news:OkgBlwJAEHA.220@.TK2MSFTNGP09.phx.gbl...
|||Sorry, I've made a mistake...
Well, both processes are online and user-connected. The app was not
designed to say "your request is being processed". The application will be
accessed by many users at a time. These processes are resources of the
application. I have to guarantee that they will not affect each other. I'm
the DBA. I cannot say: "you have to change the application flow because of
the SQL Server updlock implementation".
You're right to say that my updlocks are locking different resources. Now
it is clear to me that SQL Server's select locking strategy is intended to
guarantee some ACID properties, disallowing, for example, phantom reads in
serializable mode. And the major use of updlock is to prevent deadlock
between users that are running the same process on the database (conversion
deadlock). If SQL Server chose to use the same plan for all users, the
updlock would lock the same index and the users would be correctly
serialized, avoiding deadlocks. But SQL Server does not guarantee that the
same procedure will always use the same index, so the updlock may not be
honored. And if I have two independent processes, using different indexes,
that must be serialized with updlock? Do I have to lock the entire table?
And what if the processes may run many times a second, while other users
are trying to insert and update data? This whole thing because updlock does
not lock the primary key when using a covering index?
The BOL says: "UPDLOCK: Use update locks instead of shared locks while
reading a table, and hold locks until the end of the statement or
transaction. UPDLOCK has the advantage of allowing you to read data (without
blocking other readers) and update it later with the assurance that the data
has not changed since you last read it." There's nothing about a covering
index being locked independently. Because of this I got confused.
Now I understand what is happening. I will stop complaining and rewrite my
queries to produce the correct behavior. I'm very grateful for your help and
your patience in ignoring my brutality against the English language. I hope
to have the opportunity to return the help some day.
"Igor Raytsin" <n&i@.cyberus.ca> wrote in message
news:40457b40_1@.news.cybersurf.net...
|||Hi Francisco,
Neither "set transaction isolation level serializable" nor UPDLOCK
prevents other transactions from reading the data your transaction
has just read. Only an exclusive lock prevents access to the resource.
If you do not want to lock the whole table, you need to create your own
'external' locking mechanism.
For example, in your script add this table:
CREATE TABLE my_locks (
id int not null,
table_name sysname not null,
spid int null)
GO
INSERT INTO my_locks VALUES (1, 'clientbalance', NULL)
Then in both of your processes do something like this:
set transaction isolation level serializable
go
begin transaction
UPDATE my_locks
SET spid = @@spid
WHERE table_name = 'clientbalance'
...
commit
Hope this helps,
Igor Raytsin
"Francisco D. Kurpiel" <francisco.kurpiel@.westbest.com.br> wrote in message
news:%23CRU$4SAEHA.212@.TK2MSFTNGP12.phx.gbl...
|||> Neither "set transaction isolation level serializable" nor UPDLOCK
> prevents other transaction from reading the data your transaction
> has just read. Only Exclusive lock prevent access to the resource.
Exactly! The serializable isolation level grants me consistency (if the
access path is also consistent), not exclusivity. But it may generate
deadlocks. I choose updlock and xlock just when the default locking may
cause deadlock.
I'll explain what made me post the first text to this list. I want to:
* Use shared locks to avoid phantom reads. If I read some data for the
client, I need this data untouched until the end of my transaction. This
behavior is not an unnecessary luxury. We can simulate situations with our
business rules that will return inconsistent data to the client if phantom
reads are possible, even if we do not requery the same data. I will not put
an example in this post because you know locking very well.
* Use updlock to avoid conversion deadlock. All the available documentation
about conversion deadlock that I remember says that I need to use updlock,
because: 1. it's compatible with shared locks; 2. it's not compatible with
other updlocks, serializing other occurrences of the same process and
avoiding conversion deadlock; 3. it's not compatible with exclusive locks.
* Use exclusive locks in some specific circumstances. For example, when I
want to avoid cycle deadlocks. The recommended way to avoid this kind of
deadlock is to access the data in a defined order. The order I have chosen
is not natural for some operations. In some cases the only way to avoid
deadlocks is xlocking resources at the top of the locking path and then
performing the desired operation.
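The three compatibility points above can be sketched as a toy lock table. This is only an illustration of the published lock-compatibility matrix; the class and names are invented for the sketch and are not SQL Server internals.

```python
class ToyLockManager:
    """Grants S (shared), U (update) and X (exclusive) locks on one resource."""
    # COMPATIBLE[(requested_mode, already_granted_mode)]
    COMPATIBLE = {
        ("S", "S"): True,  ("S", "U"): True,  ("S", "X"): False,
        ("U", "S"): True,  ("U", "U"): False, ("U", "X"): False,
        ("X", "S"): False, ("X", "U"): False, ("X", "X"): False,
    }

    def __init__(self):
        self.granted = []   # list of (transaction, mode)

    def try_acquire(self, txn, mode):
        # A request is granted only if it is compatible with every lock
        # held by *other* transactions on this resource.
        ok = all(self.COMPATIBLE[(mode, held)]
                 for owner, held in self.granted if owner != txn)
        if ok:
            self.granted.append((txn, mode))
        return ok

mgr = ToyLockManager()
assert mgr.try_acquire("T1", "U")       # point 1: T1 reads under an update lock,
assert mgr.try_acquire("T2", "S")       #          and plain readers are not blocked
assert not mgr.try_acquire("T2", "U")   # point 2: a second updlock must wait, so two
                                        #          read-then-write transactions serialize
                                        #          instead of both holding S and
                                        #          deadlocking on the upgrade to X
assert not mgr.try_acquire("T3", "X")   # point 3: X is incompatible with a held U
```

The whole complaint in this thread is that the real server applies this matrix per index entry, so two updlocks taken through different indexes never meet in the same row of the table.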
My work now is to write some stored procedures that just return consistent
data, update data in a consistent way, and do not produce deadlocks. My
team has heard beautiful stories about the unique features of the SQL
Server locking mechanism, which would help to achieve this behavior. The
goal of updlock is to avoid conversion deadlocks, but Microsoft's
implementation of updlock does not guarantee the required serialization in
some circumstances. I have to use some scary tricks to make SQL Server do
what I want.
That's my first disappointment with SQL Server. My trust that I can reach
my goal remains intact. I hope that some day Microsoft will change the
locking query hints to lock the used index *and* some fixed index (or rid,
on heaps) to really guarantee, for example, the exclusive nature of an
xlock hint.
This discussion has become more philosophical and less technical, and
that's the designers' work. For the moment, I just need to understand how
it works and make it work as well as possible. Some day I will be
responsible for designing apps at my company. And I will require that any
technical promise become reality, as I'm requiring from SQL Server a way to
avoid conversion deadlocks.
Thank you again, and good luck with your projects.
Sunday, March 11, 2012
Advance Update Statement Sql Server
I am trying to speed up my update statements by removing inner select
statements.
Example:
update orders set shipname = (select contactName from
customers where customerid = orders.customerID)
I read some articles which said that I should be able to use an inner
join on the update statement like the following:
update orders set shipname = (select contactName from customers where
customerid = orders.customerID)
But every time that I run this statement I get the follwing error:
Server: Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'inner'.
Any Help will be greatly appreciated.
Thank you.
|||I think you posted the wrong UPDATE statement. Both those statements
are identical and valid syntax.
However, if performance is your concern then why not take SHIPNAME out
of the Orders table. It looks like it's redundant there.
--
David Portas
SQL Server MVP
--
|||See "Changing Data Using the FROM Clause" and also example C under
UPDATE in Books Online (although it would be better to rewrite it with
INNER JOIN).
Simon
|||Both statements were the same?! I think I know what you meant to say
...
As long as the scalar query expression returns zero or one row, you
will be fine. If you use the proprietary FROM syntax, you will get an
unpredictable result from a multi-row result set.
The real cost of an update is in the physical disk access, not the
code.
|||HeadScratcher (mayur@.servicemg.com) writes:
> I am trying to speed up my update statements by removing inner select
> statements.
> Example:
> update orders set shipname = (select contactName from
> customers where customerid = orders.customerID)
> I read some articles which said that I should be able to use an inner
> join on the update statement like the following:
> update orders set shipname = (select contactName from customers where
> customerid = orders.customerID)
> But every time that I run this statement I get the follwing error:
> Server: Msg 156, Level 15, State 1, Line 1
> Incorrect syntax near the keyword 'inner'.
Apparently there was some glitch in the editing. Anyway, this is what
you want:
UPDATE o
SET ShipName = c.ContactName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
I suspect the problem is that you left out the FROM clause.
I left out INNER here, because this is implied.
I should add that your original syntax is in alignment with the ANSI
standard, whereas the syntax with FROM/JOIN is proprietary to MS SQL Server
and Sybase (and possibly Informix). If you need portability, stick to the
original syntax. As long as you work with SQL Server only, do as you
please. Personally, I find the FROM/JOIN syntax very pleasant, as it
builds on the same paradigm as a regular SELECT statement. It is also
more efficient, if you need to update more than one column.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||David Portas (REMOVE_BEFORE_REPLYING_dportas@.acm.org) writes:
> However, if performance is your concern then why not take SHIPNAME out
> of the Orders table. It looks like it's redundant there.
Nah, I would not recommend people to drop columns from their
Northwind databases. :-)
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
|||>> I find the FROM/JOIN syntax very pleasant, as it builds on the same paradigm as a regular SELECT statement.<<
Arrrgh! The **consistent meaning in the Standard SQL model** of a
FROM clause is that a temporary working table is constructed, used and
dropped at the end of the statement. You would be updating a temporary
working table, not the base table.
The Sybase, Informix and MS SQL Server syntax might look the same, but
the semantics are all slightly different when you get to a 1:m
relationship. Moving the code is deadly -- it moves over to the next
platform and runs differently. With the ANSI syntax, the vendors have
to follow the same rules. This is a good thing.