
Tuesday, March 27, 2012

Advice on developing a mobile app that uses SQL 2005 CE

Hi,

I am new to SQL 2005 CE but fairly experienced with SQL 2000 and know my way around SQL 2005. I am creating a PDA app that must support the following scenario:

My company has a SQL 2000 server with a "computer assets" database. This asset database must be transferable to an .sdf file (SQL Server 2005 Mobile) on the PDA when the user docks their PDA to the desktop PC (not the same machine as the SQL 2000 server). This transfer can be via an option in the mobile app that is initiated by the user, or automatic.

The mobile application basically supports stock-taking of the assets, so users go out on site and record information about the assets. This information is stored in a separate .sdf database on the PDA.

When the user gets back to the office they dock with their desktop and then must transfer the data from the second .sdf database onto the SQL 2000 server. Let's call this the "asset metadata".

That's basically it! But I'm having issues finding the right process to attack the data transfer tasks.

My issues:

- Firstly, how do I populate the .sdf on the PDA for development purposes? Microsoft tutorials seem to be geared at SQL Server 2005 Management Studio only. I only have SQL Server 2005 Management Studio Express on my development machine, as well as all the necessary SQL 2005 CE SDKs and VS2005. We do not use SQL Server 2005 at my organisation as yet and it will be some time before we migrate across. In effect I'm looking for something that behaves like a DTS package to transfer data across.

- How to populate the "asset database" .sdf file from the SQL 2000 server when the user has docked to their desktop PC? (This needs to be refreshed.)

- How to transfer the "asset metadata" from the .sdf on the PDA to the SQL server?

I've looked at merge replication a little but am not sure whether this is what I need, and again most online information seems to be tailored to SQL Server 2005.

Can anyone kindly point me in the right direction?

I believe that merge replication will be the solution to your problems, as it will allow you to populate the device .sdf, both for development and production purposes, and to transfer inserted data from the device to the central SQL Server.

I would strongly recommend upgrading to SQL Server 2005, but SQL Server 2000 is a supported platform for merge synchronization with SQL 2005 CE.

The topic below covers both SQL Server 2000 and 2005 configurations: http://msdn2.microsoft.com/en-us/library/ms171850.aspx

|||Hi Erik,

Thanks for your post. This article does indeed look like what I was searching for. I've been a bit busy on other work though and unable to dive into it thoroughly.

I will update once I have time to get back to the SQL Mobile project again.|||


Hello again,


I've managed to set up everything as follows:

- created the publication for merge replication on the SQL 2000 server
- allowed the publication to have SQL Server CE subscriptions
- snapshot was created fine (at least SQL Server said everything was successful)
- set up IIS for replication and this also went well; IIS is running on the same physical server as SQL 2000
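For reference, the publication half of those steps can be sketched in T-SQL. This is only a sketch: the database and publication names match the VB.Net code further down, the article/table name is hypothetical, and most sp_addmergepublication defaults are elided.

```sql
-- Run in the AssetRegister database on the SQL 2000 publisher.
-- Enable the database for merge publishing.
EXEC sp_replicationdboption
    @dbname  = 'AssetRegister',
    @optname = 'merge publish',
    @value   = 'true'

-- Create the publication. A character-mode snapshot is what
-- SQL Server CE subscribers require (rather than native mode).
EXEC sp_addmergepublication
    @publication = 'asset_list',
    @sync_mode   = 'character'

-- Add an article; 'Assets' is a hypothetical table name.
EXEC sp_addmergearticle
    @publication   = 'asset_list',
    @article       = 'Assets',
    @source_object = 'Assets'
```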

So I then added some code to the application on my Pocket PC, and it seems to be working except that the database on the local device doesn't seem to get any data added to it.

The following VB.Net code is what I am using (I have replaced server names etc. with arbitrary values enclosed in < > for privacy reasons):

Try
    ' Set up the Replication object.
    repl = New SqlCeReplication()
    repl.InternetUrl = "http://<myserver>/AssetSync/sqlcesa30.dll"
    repl.Publisher = "<publisher name>"
    repl.PublisherDatabase = "AssetRegister"
    repl.PublisherLogin = "<login>"
    repl.PublisherPassword = "<password>"
    repl.Publication = "asset_list"
    repl.SubscriberConnectionString = "Data Source=\Assets.sdf"
    repl.Subscriber = "MySubscriber"

    ' Create the local SSCE database subscription.
    repl.AddSubscription(AddOption.ExistingDatabase)

    ' Synchronize to the instance of SQL Server 2000 to populate the subscription.
    repl.Synchronize()

    outputLabel.Text = "Subscription Complete!"

Catch ex As SqlCeException
    ' Use your own error handling routine to show error information.
    outputLabel.Text = ex.ToString()

Finally
    ' Dispose of the Replication object.
    repl.Dispose()
End Try

There are no errors thrown, and my application actually outputs "Subscription Complete!", but the database is not being populated. My VS2005 project has a blank Assets.sdf file in it.

I have also tried starting the app with no .sdf file and changing the subscription code to "repl.AddSubscription(AddOption.CreateDatabase)",

but this behaves the same: no database created. I am thinking there might be something wrong with the publication?
|||

I would start by testing the publication from a desktop, either via SQL Server 2005 Management Studio or using this utility from Nick Randolph: http://www.softteq.com/tools/publish.htm

Advice: normalization

I have a web app which is used to do normal inserts/updates of employee info. Connected to each employee that is entered is some data that is imported from an outside source. The question I have is that currently my database is very normalized, and importing data from this outside source will be quite a pain because of that. Is it bad practice to denormalize a specific table if no user will ever insert/update it besides DTS?

Why do you feel that normalization will make data importation more difficult?
Normalization should make the process somewhat easier -- it helps guarantee data integrity, so you won't have to put excessive amounts of validation code into your importation routines. I would not recommend denormalizing to make this process "easier". Data integrity is the most important factor, and is the reason we use databases instead of flat file systems!
Can you post some more information about your schema and the data you're trying to import?
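As an aside, importing into a normalized schema is commonly handled by having DTS bulk-load into a staging table and then moving rows with set-based INSERTs, so the constraints do the integrity work. A minimal sketch, with hypothetical table and column names:

```sql
-- DTS loads the raw feed into EmployeeImportStaging, then:

-- Add any departments the feed mentions that we don't have yet.
INSERT INTO Department (DepartmentName)
SELECT DISTINCT s.DepartmentName
FROM EmployeeImportStaging s
WHERE NOT EXISTS (SELECT 1 FROM Department d
                  WHERE d.DepartmentName = s.DepartmentName)

-- Attach the imported data to existing employees via natural keys.
INSERT INTO EmployeeExternalData (EmployeeID, DepartmentID, ImportedValue)
SELECT e.EmployeeID, d.DepartmentID, s.ImportedValue
FROM EmployeeImportStaging s
JOIN Employee e   ON e.EmployeeNumber = s.EmployeeNumber
JOIN Department d ON d.DepartmentName = s.DepartmentName
```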

|||No. Denormalization is used in OLTP (online transaction processing) to improve performance. Run a search for "composite index" in SQL Server BOL (Books Online). A composite index can take 16 columns with a size limit of 900 bytes. Hope this helps.|||No. OLTP databases should NEVER be denormalized when data integrity is important! Use of denormalization for performance should be limited to OLAP and reporting scenarios in which the data can be re-built from a trusted source. I would never risk denormalizing a transactional database for the sake of performance. Even the best performance will be utterly useless if transactions are lost due to data integrity issues.
I'm unsure of what a composite index has to do with this topic, but thanks for throwing it out there.

|||OLTP (online transaction processing) is the general term for non-analytical use of the database. Denormalization in SQL Server 2000 and below can be achieved through a composite index. A composite index is a way of life in SQL Server 7.0 and below without built-in DRI (declarative referential integrity). And the table in question is being used by a process, not people, so the access can be scheduled.|||I'm not sure how you've arrived at the conclusion that creating a composite index is akin to denormalizing a table. Given that indexes are physical, not logical parts of the schema, they are not considered in discussions of normalization. I'd recommend that you read this, and other articles on the Database Debunkings website, to dispel that notion.
But even if that were the case, it would not apply to this situation. The OP is concerned that data importation is going to be difficult due to the normalization of the database. But as I assume you're aware, DML operations cannot be performed on indexes. Only on tables (and indirectly on tables via views and UDFs). Furthermore, large composite indexes will slow down importation -- not speed it up. Therefore, creating large indexes will serve to hinder -- not help -- the OP's importation efforts.

|||

AdamMachanic wrote:

I'm not sure how you've arrived at the conclusion that creating a composite index is akin to denormalizing a table. Given that indexes are physical, not logical parts of the schema, they are not considered in discussions of normalization.



Indexes are not considered because indexes come from vendors and normalization is ANSI SQL, but there are so many vendor implementations that make the database better. In SQL Server 2005 it is taken to the next level with index column INCLUDE. Try the link below for index covering, which is used by Microsoft Consulting and is considered good practice by the industry. While you are at it, check the appendix of the book below to see why E. F. Codd called it normalization. If it is not in the PPT online, it became "normalization" because Nixon had normalized relations with China.

http://www.sql-server-performance.com/covering_indexes.asp

http://wings.buffalo.edu/mgmt/courses/mgtsand/data.html

|||It doesn't matter if something is a vendor extension or defined by the ANSI Standard -- SQL and the ANSI Standard have absolutely nothing to do with normalization. As a matter of fact, I just searched the complete text of the SQL-99 standard and a form of the word 'normalize' appears only once -- in reference to normalization of datetime datatypes!
What the Relational Model is concerned with is the distinction between the logical and the physical. Tables are logical -- and therefore subject to the rules of logical normalization. Indexes are physical -- they have no bearing upon the logical schema.

|||I have ANSI 2003 and the person will take what meets their existing needs.|||

Caddre wrote:

I have ANSI 2003 and the person will take what meets their existing needs.


Where is normalization defined in the SQL 2003 standard?

|||We would not be having this conversation if you had used SQL Server 6.5 and below, when SQL Server and normalization were not mentioned in one sentence. What is normalization when queries are taking hours to complete?|||What difficulties did you have with normalization in SQL Server 6.5?

Sunday, March 25, 2012

Advice MSDE/ACCESS

I wrote an app that uses MSDE. It is VB.NET, and there are only 10 users
tops.
Making an MSDE installation and getting the database working requires me
personally, which makes it not really a commercially distributable product.
Would I be better off just using an MDB file? MSDE seems really nice
and is basically SQL Server, but I need something that installs easily -- i.e.
something my users can install. I know I can easily install an MDB file.
Would the MDB file be sufficient for 10-20 users tops?
thanks,
Shane
On Sat, 12 Jun 2004 02:36:37 -0500, SStory
<TheStorys@.TAKEOUTTHISSPAMBUSTERsofthome.net> wrote:

> Would the MDB file be sufficient for 10-20 users tops?
That depends on the complexity of your queries and the level of your
requirements. If you're only doing some SELECT statements and occasional
updates/inserts, an .mdb file is certainly enough. You could run into
troubles with locking mechanisms in MS Access though, so take care.
Installing MSDE is not a big deal; you just have to take care of populating
the database yourself (i.e. from your application) to make it easier for
your users.
Using SECURITYMODE and DBPWD(?) as setup parameters can even set up a
version of MSDE that runs similar to MSDE 1.0 (speaking of access to it)
Stefan
Give a man fire and he is warm for a day.
Set him on fire and he is warm for the rest of his life.
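For reference, an unattended MSDE install along the lines Stefan mentions looks something like the following (SECURITYMODE and SAPWD are documented MSDE 2000 setup parameters; the instance name and password here are made-up examples):

```
setup.exe /qb+ INSTANCENAME=MYAPP SECURITYMODE=SQL SAPWD=StrongP@ssw0rd
```

SECURITYMODE=SQL enables mixed-mode authentication, which is what makes it behave like MSDE 1.0 as far as access goes.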

Advice for storing reservation/booking date ranges...

I am building a small app that will display availability data for properties. I have a calendar that displays the dates for a whole year in month rows with each days colour representing the availability status, i.e. booked, on hold etc.

My question is about how to store these dates in the db. At the moment I have the table below:

TABLE Availability
[PropertyID] [int] NOT NULL ,
[StatusID] [tinyint] NOT NULL ,
[StartDate] [smalldatetime] NOT NULL ,
[EndDate] [smalldatetime] NOT NULL

I was planning on having four statuses for any given date: unknown, available, on hold or booked.

Displaying the dates has proved pretty simple, but updating availability means I would need to query the db to see if any of the dates overlapped; I would then have to add the new date range/status as well as change any date ranges that overlapped, either in the sp or in the code. This is what made me wonder if there was a better way.

Does this sound a reasonable approach? Any advice or pointers would be greatly appreciated, This is the first time I have had to store date ranges and I want to make sure I am doing it right.
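For the overlap check itself, two ranges overlap exactly when each one starts on or before the other's end, so a single query against the Availability table above finds every affected row. A sketch, where @NewStart/@NewEnd are the proposed range:

```sql
-- Ranges (StartDate, EndDate) and (@NewStart, @NewEnd) overlap iff
-- StartDate <= @NewEnd AND @NewStart <= EndDate.
SELECT PropertyID, StatusID, StartDate, EndDate
FROM Availability
WHERE PropertyID = @PropertyID
  AND StartDate <= @NewEnd
  AND EndDate   >= @NewStart
```

The rows this returns are the ones whose ranges must be split or trimmed before inserting the new range/status.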

Instead of an Availability table, I would think you'd want a Booked table. Assume the property is available and create records in the Booked table when it's booked or held. Don't create records if you'd have an overlap by doing so. If no record for a time frame, then it's available.

It seems the way you're doing it, you would have to create a record for every date for every property. That makes it tough to be more granular if you need to go to the hour or minute level.

|||

Yes, unfortunately I have to cater for users who are not keeping the calendar up to date, hence using both the unknown and available statuses.

The default will be unknown, so anything not in the table will be that status, and then it is up to users to use the calendar to update the property's availability. So there are three statuses: available, on-hold and booked. These three will be in the table.

I think I will proceed as is and see how I go...thanks for the input.

Friday, February 24, 2012

ADO.NET and SQL Server Data Paging

I had to build a custom data paging control for my asp.net app and SQL
Server.
This is what I'm doing:
1. do a SELECT to a SqlDataReader
2. create a DataTable with data from GetSchema()
3. loop through the datareader using .Read() to the point I want to start
4. add data to DataTable with max of [pageSize] records
5. close the data reader and return
The question is:
Isn't looping reader.Read(), doing nothing just to get to the point where I
want to start getting data, too expensive? Although the DataReader is fast,
won't ".Read()" pull the row data from the server to the client? If so,
if I need to get data from record 10000 ahead, it is actually reading
all ten thousand rows of data, so it is a big waste of resources.
Maybe someone who knows more about the internals of the DataReader can
answer this.
Since SQL Server doesn't support "LIMIT" like keywords, if there is a
better way to page data in sql server, I would like to know.
Thanks.
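The five steps above, as a VB.Net sketch (connStr, query, startIndex and pageSize are hypothetical variables; the point of the comments is that step 3 still pulls every skipped row to the client):

```vbnet
' Sketch of the reader-skip paging approach described above.
Dim table As New DataTable()
Using conn As New SqlConnection(connStr)
    conn.Open()
    Dim cmd As New SqlCommand(query, conn)
    Using reader As SqlDataReader = cmd.ExecuteReader()
        ' Step 2: build columns from the reader's schema information.
        For Each schemaRow As DataRow In reader.GetSchemaTable().Rows
            table.Columns.Add(CStr(schemaRow("ColumnName")), _
                              CType(schemaRow("DataType"), Type))
        Next
        ' Step 3: skip to the start point. Each Read() still transfers
        ' the row from the server, which is the cost being asked about.
        Dim skipped As Integer = 0
        While skipped < startIndex AndAlso reader.Read()
            skipped += 1
        End While
        ' Step 4: copy at most pageSize rows into the DataTable.
        Dim copied As Integer = 0
        While copied < pageSize AndAlso reader.Read()
            Dim values(reader.FieldCount - 1) As Object
            reader.GetValues(values)
            table.Rows.Add(values)
            copied += 1
        End While
    End Using  ' Step 5: reader closed by Using.
End Using
```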
Provided you are using an incrementing number, like an Identity column, you
can store first and last and then use a DataSet and bind (avoid reader to
fill a DataTable and make the code a bit simpler). If you really like using a
Reader (MS does under the hood), it is not a problem.
Using the "pointers" you are storing, you can easily request data from that
point. If you like persisting the data rather than requesting each time, read
the DataTable as XML and pull the subset you want to paint on the screen.
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA
***************************
Think Outside the Box!
***************************
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>
|||You can think about the following query:
SELECT TOP PAGE_SIZE * FROM TABLE_NAME WHERE KEY_ID > last_ID ORDER BY KEY_ID
HTH
Elton Wang
elton_wang@.hotmail.com
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>
|||Take a look at:
http://www.aspfaq.com/show.asp?id=2120
Rick Sawtell
MCT, MCSD, MCDBA
|||Check out http://www.aspfaq.com/show.asp?id=2120 for various paging methods
Personally, I build parameterized dynamic SQL on the client side using the
query technique below. The primary key (or unique columns) are specified in
the WHERE and ORDER BY clauses.
use Northwind
--first page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
ORDER BY OrderID ASC
--next page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID > 10257 --last OrderID from currently displayed page
ORDER BY OrderID ASC
--prev page
SELECT OrderID, CustomerID, OrderDate
FROM
(
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID < 10258 --first OrderID from currently displayed page
ORDER BY OrderID DESC
) AS a
ORDER BY OrderID ASC
Hope this helps.
Dan Guzman
SQL Server MVP
"Natan Vivo" <nvivo@.terra.com.br> wrote in message
news:u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
|||You can use this
Dim oConnection As New SqlConnection("Provider...
oConnection.Open
Dim oDataAdapter As New SqlDataAdapter("SELECT ...", oConnection)
Dim oDataSet As DataSet = New DataSet
oDataAdapter.Fill(oDataSet, CurrentPageSize, PageSize, mstrTableName)
If you have 400 records and you need the first 100 records then
CurrentPageSize = 0 and PageSize = 100
If you need the next 100 records then CurrentPageSize = 100 and PageSize =
100
Good luck
Marc R.
"Natan Vivo" <nvivo@.terra.com.br> a crit dans le message de news:
u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
|||Cowboy (Gregory A. Beamer) - MVP wrote:
> Provided you are using an incrementing number, like an Identity column, you
No, I am not. That is why I needed to create my own pagination. There is
no way to determine what is being paginated, nor the type of the
field in the ORDER BY. It may be text, may be number...
Does anyone know if reader.Read() will actually read the data? If
yes, then probably there is no better way to do this. My implementation
is pretty much like using a DataSet, except that I don't need to instantiate
a DataAdapter or a DataSet.
Thanks.

ADO.NET and SQL Server Data Paging

I had to built a custom data paging control for my asp.net app and SQL
Server.
This is what I'm doing:
1. do a SELECT to a SqlDataReader
2. create a DataTable with data from GetSchema()
3. loop trough the datareader using .Read() to the point I want to start
4. add data to DataTable with max of [pageSize] records
5. close the data reader and return
The question is:
Isn't looping reader.Read() doing nothing to go to start point where I
want start getting data too expensive? Although DataReader is fast,
won't ".Read()" get the row data from the server to the client? If so,
if I need to get data from record 10000 ahead, it is actually reading
all 10 thousand rows of data, so it is a big waste of resources.
Maybe someone who knows more about the internals of the DataReader can
answer this.
Since SQL Server doesn't support "LIMIT" like keywords, if there is a
better way to page data in sql server, I would like to know.
Thanks.Provided you are using an incrementing number, like an Identity column, you
can store first and last and then use a DataSet and bind (avoid reader to
fill a DataTable and make the code a bit simpler). If you really like using a
Reader (MS does under the hood), it is not a problem.
Using the "pointers" you are storing, you can easily request data from that
point. If you like persisting the data rather than requesting each time, read
the DataTable as XML and pull the subset you want to paint on the screen.
--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA
***************************
Think Outside the Box!
***************************
"Natan Vivo" wrote:
> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>|||You can think about following query:
SELECT TOP PAGE_SIZE * FROM TABLE_NAME WHERE KEY_ID > last_ID ORDER BY KEY_ID
HTH
Elton Wang
elton_wang@.hotmail.com
"Natan Vivo" wrote:
> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>|||Take a look at:
http://www.aspfaq.com/show.asp?id=2120
Rick Sawtell
MCT, MCSD, MCDBA|||Check out http://www.aspfaq.com/show.asp?id=2120 for various paging methods
Personally, I build parameterized dynamic SQL on the client side using the
query technique below. The primary key (or unique columns) are specified in
the WHERE and ORDER BY clauses.
use Northwind
--first page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
ORDER BY OrderID ASC
--next page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID > 10257 --last OrderID from currently displayed page
ORDER BY OrderID ASC
--prev page
SELECT OrderID, CustomerID, OrderDate
FROM
(
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID < 10258 --first OrderID from currently displayed page
ORDER BY OrderID DESC
) AS a
ORDER BY OrderID ASC
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Natan Vivo" <nvivo@.terra.com.br> wrote in message
news:u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.|||You can use this
Dim oConnection As New SQLConnection("Provider...
oConnection.Open
Dim oDataAdapter As New SqlDataAdapter("SELECT ...", oConnection)
Dim oDataSet As DataSet = New DataSet
oDataAdapter.Fill(oDataSet, CurrentPageSize, PageSize, mstrTableName)
If you have 400 records and you need the first 100 records then
CurrentPageSize = 0 and PageSize = 100
If you need the next 100 records then CurrentPageSize = 100 and PageSize =100
Good luck
Marc R.
"Natan Vivo" <nvivo@.terra.com.br> a écrit dans le message de news:
u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.|||Cowboy (Gregory A. Beamer) - MVP wrote:
> Provided you are using an incrementing number, like an Identity column, you
No, I am not. That is why I needed to create my own pagination. There is
no way to determine what is beeing paginated neither the type of the
field in order by.. It may be text, may be number...
Does anyone know if the reader.Read() will actually read the data? if
yes, than probably there is no better way to do this. My inplementation
is pretty like using dataset, saving that i don't need to instantiate a
DataAdaper or neither a DataSet..
Thanks.

ADO.NET and SQL Server Data Paging

I had to built a custom data paging control for my asp.net app and SQL
Server.
This is what I'm doing:
1. do a SELECT to a SqlDataReader
2. create a DataTable with data from GetSchema()
3. loop trough the datareader using .Read() to the point I want to start
4. add data to DataTable with max of [pageSize] records
5. close the data reader and return
The question is:
Isn't looping reader.Read() doing nothing to go to start point where I
want start getting data too expensive? Although DataReader is fast,
won't ".Read()" get the row data from the server to the client? If so,
if I need to get data from record 10000 ahead, it is actually reading
all 10 thousand rows of data, so it is a big waste of resources.
Maybe someone who knows more about the internals of the DataReader can
answer this.
Since SQL Server doesn't support "LIMIT" like keywords, if there is a
better way to page data in sql server, I would like to know.
Thanks.Provided you are using an incrementing number, like an Identity column, you
can store first and last and then use a DataSet and bind (avoid reader to
fill a DataTable and make the code a bit simpler). If you really like using
a
Reader (MS does under the hood), it is not a problem.
Using the "pointers" you are storing, you can easily request data from that
point. If you like persisting the data rather than requesting each time, rea
d
the DataTable as XML and pull the subset you want to paint on the screen.
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA
***************************
Think Outside the Box!
***************************
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>|||You can think about the following query:
SELECT TOP PAGE_SIZE * FROM TABLE_NAME WHERE KEY_ID > last_ID ORDER BY KEY_ID
HTH
Elton Wang
elton_wang@.hotmail.com
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>|||Take a look at:
http://www.aspfaq.com/show.asp?id=2120
Rick Sawtell
MCT, MCSD, MCDBA|||Check out http://www.aspfaq.com/show.asp?id=2120 for various paging methods
Personally, I build parameterized dynamic SQL on the client side using the
query technique below. The primary key (or a set of unique columns) is specified
in the WHERE and ORDER BY clauses.
use Northwind
--first page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
ORDER BY OrderID ASC
--next page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID > 10257 --last OrderID from currently displayed page
ORDER BY OrderID ASC
--prev page
SELECT OrderID, CustomerID, OrderDate
FROM
(
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID < 10258 --first OrderID from currently displayed page
ORDER BY OrderID DESC
) AS a
ORDER BY OrderID ASC
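SQL Server 2000 has no LIMIT keyword, but SQLite does, so the same first/next/prev keyset logic can be sketched and sanity-checked in Python. The table and order ids below are invented stand-ins for the Northwind example:

```python
import sqlite3

# Toy stand-in for Northwind's Orders table (ids are made up)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO orders VALUES (?)",
                [(i,) for i in range(10248, 10278)])

PAGE = 10

def first_page():
    # TOP 10 ... ORDER BY OrderID ASC
    return [r[0] for r in con.execute(
        "SELECT order_id FROM orders ORDER BY order_id LIMIT ?", (PAGE,))]

def next_page(last_id):
    # TOP 10 ... WHERE OrderID > last displayed id
    return [r[0] for r in con.execute(
        "SELECT order_id FROM orders WHERE order_id > ? "
        "ORDER BY order_id LIMIT ?", (last_id, PAGE))]

def prev_page(first_id):
    # Take the 10 rows just below first_id in DESC order, then flip ascending,
    # mirroring the nested-query trick above
    rows = [r[0] for r in con.execute(
        "SELECT order_id FROM orders WHERE order_id < ? "
        "ORDER BY order_id DESC LIMIT ?", (first_id, PAGE))]
    return sorted(rows)

p1 = first_page()        # 10248..10257
p2 = next_page(p1[-1])   # 10258..10267
back = prev_page(p2[0])  # same rows as p1
```

Note the server only ever touches one page of rows per request, which is exactly why this beats skipping rows with reader.Read().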
Hope this helps.
Dan Guzman
SQL Server MVP
"Natan Vivo" <nvivo@.terra.com.br> wrote in message
news:u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.|||You can use this
Dim oConnection As New SqlConnection("Provider...
oConnection.Open
Dim oDataAdapter As New SqlDataAdapter("SELECT ...", oConnection)
Dim oDataSet As DataSet = New DataSet
oDataAdapter.Fill(oDataSet, CurrentPageSize, PageSize, mstrTableName)
If you have 400 records and you need the first 100 records then
CurrentPageSize = 0 and PageSize = 100
If you need the next 100 records then CurrentPageSize = 100 and PageSize = 100
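As a caveat, that Fill overload skips the leading rows on the client rather than in the query, so the server still streams them. A quick Python sketch of its skip/take semantics (the function and names are illustrative; the real .NET overload is Fill(dataSet, startRecord, maxRecords, srcTable)):

```python
def fill(rows, start_record, max_records):
    """Mimic DataAdapter.Fill(ds, startRecord, maxRecords, table):
    skip the first start_record rows, then take up to max_records."""
    out = []
    for i, row in enumerate(rows):
        if i < start_record:
            continue  # like Fill, skipped rows are still iterated over
        if len(out) == max_records:
            break
        out.append(row)
    return out

records = list(range(1, 401))      # 400 records, as in the example above
page1 = fill(records, 0, 100)      # records 1..100
page2 = fill(records, 100, 100)    # records 101..200
```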
Good luck
Marc R.
"Natan Vivo" <nvivo@.terra.com.br> a crit dans le message de news:
u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.|||Cowboy (Gregory A. Beamer) - MVP wrote:[vbcol=seagreen]
> Provided you are using an incrementing number, like an Identity column, you[/vbcol
]
No, I am not. That is why I needed to create my own pagination. There is
no way to determine what is beeing paginated neither the type of the
field in order by.. It may be text, may be number...
Does anyone know if reader.Read() will actually read the data? If
yes, then there is probably no better way to do this. My implementation
is pretty much like using a DataSet, except that I don't need to
instantiate a DataAdapter or a DataSet.
Thanks.

ADO.net (how to close a reader)

Hello,
I'm trying to run the following snippet as Console app, and I get an error:
"a reader is already open for this connection, and should be closed first"
I tried to manually call:
SqlDataReader.Close() at the beginning of the code, but I still get the error. Any suggestions on how to manually close the reader?
thank you
---- here's the code ----
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;

namespace ADO.NET
{
    /// <summary>
    /// Summary description for Class1.
    /// </summary>
    class Class1
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main(string[] args)
        {
            SqlConnection cn = new SqlConnection(
                "SERVER=MyServer; INTEGRATED SECURITY=TRUE;" +
                "DATABASE=AdventureWorks");
            SqlCommand cmd1 = new SqlCommand(
                "Select * from HumanResources.Department", cn);
            cmd1.CommandType = CommandType.Text;
            try
            {
                cn.Open();
                SqlDataReader rdr = cmd1.ExecuteReader();
                while (rdr.Read())
                {
                    if (rdr["Name"].ToString() == "Production")
                    {
                        // Second command on the same open connection --
                        // this is what raises the "reader is already open" error
                        SqlCommand cmd2 = new SqlCommand("SELECT * FROM "
                            + "HumanResources.Employee WHERE DepartmentID = 7", cn);
                        cmd2.CommandType = CommandType.Text;
                        SqlDataReader rdr2 = cmd2.ExecuteReader();
                        while (rdr2.Read())
                        {
                        }
                        rdr2.Close();
                    }
                }
                rdr.Close();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                cn.Close();
            }
        }
    }
}

Only one reader at a time can be active on a single connection. Create a second connection and all will be well.

This is not a real problem, since connection pooling will make efficient use of connections.

|||thanks!

Sunday, February 19, 2012

ADO initialization fails on W2003

OVERVIEW:
ADO initialization fails on W2003 Server, Standard Edition.
This is a new system.
This is the first time this app has been compiled or run on this system.
The app runs fine on all other servers I've tried it on.
What should I check?
Is anyone else able to run ADO routines against OLE 5.2?
Thanks,
Brad.
DETAILS:
OLE32.dll v.5.2.3790.250
OLEAUT32.dll v.5.2.3790.0
Tracing into the code, I get into
_WStrAsg("", "'ISAUTOINCREMENT'");
which calls SysReAllocStringLen. This fails with an access violation.
SysReAllocStringLen is a call into OLEAUT32.dll.
All the OLE32 dlls look to be updated. They are newer than anything
on any of my XP systems.
I have a new test project that uses ADODB and it *doesn't* fail.
Yes, it does go through the same code just as nice as you could want.
The same app is deployed and running on dozens of other systems.
None are as new as this one.
So my guess is that it has to be something about the way the new
system is set up.
My next step is to start tearing down a copy of my project. But I'm
not terribly hopeful.
Thanks,
Brad.|||Long shot, but is the bad project compiled against the same library as the
working one? Also, have you updated to the latest MDAC + fixes?
-oj
"Brad White" <bwhite at inebraska . com> wrote in message
news:OQLP18vDFHA.2540@.TK2MSFTNGP09.phx.gbl...
> OVERVIEW:
> ADO initialization fails on W2003 Server, Standard Edition.
> This is a new system.
> This is the first time this app has been compiled or run on this system.
> The app runs fine on all other servers I've tried it on.
> What should I check?
> Is anyone else able to run ADO routines against OLE 5.2?
> Thanks,
> Brad.
> DETAILS:
> OLE32.dll v.5.2.3790.250
> OLEAUT32.dll v.5.2.3790.0
> Tracing into the code, I get into
> _WStrAsg("", "'ISAUTOINCREMENT'");
> which calls SysReAllocStringLen. This fails with an access violation.
> SysReAllocStringLen is a call into OLEAUT32.dll.
> All the OLE32 dll look to be updated. They are newer than anything
> on any of my XP systems.
> I have a new test project that uses ADODB and it *doesn't* fail.
> Yes, it does go through the same code just as nice as you could want.
> The same app is deployed an running on dozens of other systems.
> None are as new as this one.
> So my guess is that it has to be something about the way the new
> system is set up.
> My next step is to start tearing down a copy of my project. But I'm
> not terribly hopeful.
> --
> Thanks,
> Brad.
>
>
>|||Thanks for the reply.
The first thing I checked was MDAC, and yes it is 2.8.
I thought last night that perhaps it is a 2003 thing,
but no, we have had installations for months on 2003 Server
with no problems.
Same libraries. Tried the debug versions of the libraries in both exe's
with same results.
"oj" <nospam_ojngo@.home.com> wrote in message
news:uMArKKwDFHA.2032@.tk2msftngp13.phx.gbl...
> Long shot but is the bad project compiled on the same library as the
working
> one. Also, have you updated to the latest mdac + fixes.
> --
> -oj
>
> "Brad White" <bwhite at inebraska . com> wrote in message
> news:OQLP18vDFHA.2540@.TK2MSFTNGP09.phx.gbl...
>|||Brad,
Perhaps, reinstall the package?
-oj
"Brad White" <bwhite at inebraska . com> wrote in message
news:%23LzC%23$3DFHA.3924@.TK2MSFTNGP09.phx.gbl...
> Thanks for the reply.
> The first thing I checked was MDAC, and yes it is 2.8.
> I thought last night that perhaps it is a 2003 thing,
> but no, we have had installations for months on 2003 Server
> with no problems.
> Same libraries. Tried the debug versions of the libraries in both exe's
> with same results.
> --
> "oj" <nospam_ojngo@.home.com> wrote in message
> news:uMArKKwDFHA.2032@.tk2msftngp13.phx.gbl...
> working
>|||"oj" <nospam_ojngo@.home.com> wrote in message
news:u34aoVAEFHA.3732@.TK2MSFTNGP14.phx.gbl...
> Brad,
> Perhaps, reinstall the package?
>
Thanks for the reply.
The way it was acting reminded me of memory corruption,
but I couldn't see how that could be since no custom code
had run yet.
Turns out it was an unrelated dll that had been improperly installed.
Reinstalling it to solve another problem solved this problem too.
It wasn't an ADO thing or a WideStr thing at all.
--
Brad White