
Tuesday, March 27, 2012

advice on hosting website safely

Hello,

For one of my clients I have built two applications. The first is a Windows-based application used for production and employee scheduling; its data is stored in a SQL Express database. The second is an ASP.NET 2.0 site that contains the client's website and a portal page. The portal page uses forms authentication and redirects a customer to a page showing that customer's production schedule. So I have two kinds of users, local (LAN) and web, both using the same database.

For test purposes the site is currently hosted on the client's SBS 2003 server (this is the only server). However, I don't think it is wise to host the public website on this server. What is the best configuration to host the public website safely? I have read about putting a web server in a DMZ, but I need a connection to the SQL Express server to retrieve the data for the portal section, both for authenticating the customer and for retrieving the schedule data.

I'm currently using Windows Authentication on the SQL server. Can this still be used when the webserver is in the DMZ?

If I understand it correctly, a web server in the DMZ is not part of the local domain? Also, could remote hosting be an option at all?

Any help and advice would be highly appreciated.

Remote hosting would be the 'safest' option: if the box is hacked, the hacker is NOT in your network. Sitting in a DMZ could be relatively safe, if you are certain that there are no holes in the firewall.

The public website (application) may be best served by using application pool security on the IIS box. You can connect to a SQL Server using an IP address instead of a server name; use the IP address and the port in the connection string.

Since a server in a DMZ is not part of the local domain, you cannot use Windows Authentication on the SQL Server.
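With Windows Authentication unavailable from the DMZ, the usual workaround is a SQL Server login plus a TCP connection by IP and port. A rough sketch of what that looks like in the portal site's configuration; every value here is a placeholder, and SQL Express would additionally need to be configured to listen on a fixed TCP port:

```xml
<!-- Hypothetical web.config fragment for the ASP.NET 2.0 portal site.
     The IP, port, database name, and credentials are all placeholders. -->
<connectionStrings>
  <add name="PortalDb"
       connectionString="Data Source=192.168.0.10,1433;Initial Catalog=Scheduling;User ID=portalUser;Password=changeMe"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

The firewall then only needs to allow that single port from the DMZ host to the internal SQL Server.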

I suggest you visit the forums and blogs at www.ASP.NET; you will find folks there with a lot of experience with this particular issue.

advice on hard drive set up for 2005 sql server 64 bit

Hi
I built a server and I need some advice on reconfiguring the hard drives for
better SQL Server performance. Here are my current specs:
2 Xeon 5160 dual-core hyperthreaded processors
8 GB of FB-DIMM RAM
Windows 2003 Server 64-bit
SQL Server 2005 64-bit
2 500 GB SATA II hard drives set up as RAID 1 at the hardware level. They
are partitioned as 15 GB for the OS and the rest for my development
environment.
In my development testing I am working with databases starting at 30 GB that
can run upwards of 100 GB. After initial setup and the first round of
testing I quickly learned that I needed to move my tempdb onto the larger
partition. I also learned that I should move it onto a fast disk that isn't
part of the RAID, because it doesn't need to be backed up. Additionally, I
read that there is no need to put the system swap file on the RAID, and that
for better performance it should be on a separate, faster disk.
This is how I am thinking of reconfiguring the system:
2 36 GB 10,000 RPM SATA I drives mirrored under RAID 1, used solely for the
OS
1 174 GB 10,000 RPM SATA I drive used for tempdb and the system swap space,
and the remaining 2 500 GB drives again set up as RAID 1 for my development
databases and environments.
Does that sound better than my original setup? If not, I am open to
suggestions.
Also, if I did move my OS to the new 36 GB mirror, would I need to do a
complete reinstall, or could I do a backup and restore? I think it would be
possible using Partition Magic or a similar tool.
Thanks a lot
Sounds improved. You may also want to split tempdb across two data files,
one on the 36 GB and one on the 174 GB drives. I think the optimum for 2005
is one tempdb data file per CPU (not counting hyperthreads).
TheSQLGuru
President
Indicium Resources, Inc.
"Coaster" <Coaster@.Coaster.net> wrote in message
news:OKqVNMxpHHA.960@.TK2MSFTNGP03.phx.gbl...
> Hi
> I built a server and I need some advice on reconfiguring the hard drives
> for better sql server performance. Here are my current specs
> 2 Xeon 5160 Dual Core Hypertheaded Processors
> 8 gig of fd ram
> Windows 2003 Server 64 bit
> Sql Server 2005 64 bit
> 2 500 gig Sata II hard drives set up as Raid 1 at the hard ware level.
> They are partitioned as 15 gig for the OS and the rest for my development
> environment.
> In my development testing I am working with databases starting at 30 gig
> and can run upwards of 100 gig. After initial set up and the first round
> of testing I quickly learned that I needed to move my tempDB onto the
> larger partition. I also learned that I should move it on to a fast disk
> that isn't part of the raid because it isn't necessary to backup.
> Additionally I read that there is no need to put the system swap file on
> the raid and for better performance it should be on a separate faster
> disk.
> This is how I am thinking of reconfiguring the system
> 2 36 gig 10,000 rpm SATA I drives mirrored under raid 1 used solely for
> the OS
> 1 174 gig 10,000 rpm SATA I drive used for the tempDB & system swap space
> and the remaining 2 500 gig drives again set up on raid 1 for my
> development database & environments.
> does that sound better than my original set up ? if not I am open to
> suggestions.
> Also if I did move my OS to the new 36 gig would I need to do a complete
> reinstall or could I do a back up and restore? I think it would be
> possible using partition magic or a similiar tool.
> thanks alot
>
|||I can guess that would be true, each CPU having its own tempdb space to
use.
Thanks I'll do that
"TheSQLGuru" <kgboles@.earthlink.net> wrote in message
news:uC3biexpHHA.1144@.TK2MSFTNGP02.phx.gbl...
> Sounds improved. You may also want to put tempdb on two filegroups, one
> on 36GB and one on 174GB drives. I think the optimum for 2005 is 1 FG per
> CPU (not counting hyperthreads)
> --
> TheSQLGuru
> President
> Indicium Resources, Inc.
> "Coaster" <Coaster@.Coaster.net> wrote in message
> news:OKqVNMxpHHA.960@.TK2MSFTNGP03.phx.gbl...
>
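The tempdb advice in this thread is done with multiple data files rather than filegroups: tempdb only has its PRIMARY filegroup, and user filegroups cannot be added to it. A rough sketch, where the file paths and sizes are placeholders and the moves only take effect after the SQL Server service restarts:

```sql
-- Sketch only: move tempdb off the big RAID 1 pair and spread it over
-- the new spindles. All paths and sizes below are placeholders.

-- Relocate the existing tempdb data and log files.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\tempdb\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\tempdb\templog.ldf');

-- Add a second data file so each CPU core has a file to work against.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdb2.ndf',
          SIZE = 1GB, FILEGROWTH = 256MB);
```

Restart the SQL Server service afterwards for the MODIFY FILE changes to take effect.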

Advice on database volume

Hi all,
I'm new to data warehousing. I've built a DW with two fact tables; each one
will have more than 5 to 10 million rows added each day. Do you think
that this volume is compatible with Analysis Services 2005, or do I have to
change the model I use, by creating a fact table for each customer I have
(for instance)?
Second question: if the answer to the previous question is that I can keep
the model I've built, will I have to investigate cube partitions?
Thanks.
Jerome
You can work with this volume of data in Analysis Services 2005 without
changing your schema (and I strongly suggest you not change your
schema!).
You can use partitions to handle this volume of data without having a
single big partition that is hard to process if, for whatever reason,
you need to fully process your cube.
Be careful with aggregation design - you really need preaggregations on
this volume of data.
Marco Russo
http://www.sqlbi.eu
http://www.sqljunkies.com/weblog/sqlbi
Jerome wrote:
> Hi all,
> I'm new to datawarehousing. I've built a DW with two fact tables, each one
> will be added with more than 5 to 10 millions rows each day. Do you think
> that this volume is compatible with Analysis Services 2005 or do I have to
> change the model I use, by creating a fact table for each customer I have
> (for instance).
> Second question: if the awnser of the previous question is that I can keep
> the model I've built, will I have to investigate cube partition ?
> thank's.
> Jerome
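Partitioning a measure group as suggested above usually means binding each Analysis Services 2005 partition to a date-sliced source query, one slice per partition. A sketch, with a hypothetical fact table and integer date key:

```sql
-- Hypothetical source queries for two measure-group partitions, one per
-- month. Table and column names are placeholders; each AS2005 partition
-- is bound to a query like one of these.
SELECT * FROM FactSales
WHERE DateKey >= 20070101 AND DateKey < 20070201;  -- partition for 2007-01

SELECT * FROM FactSales
WHERE DateKey >= 20070201 AND DateKey < 20070301;  -- partition for 2007-02
```

Keeping the slices non-overlapping matters: a row that falls into two partitions would be counted twice in the cube.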
|||Hi Marco,
What do you mean exactly by "aggregation design"?
What exactly is preaggregation? Do you know of any references on the web?
Thanks
Jerome
"Marco Russo" wrote:

> You can work with this volume of data in Analysis Services 2005 without
> changing your schema (and I strongly suggest you to not change your
> schema!).
> You can use partitions to handle this volume of data without having a
> single big partition that is hard to process if for whatever reasons
> you need to full process your cube.
> Be careful with aggregation design - you really need preaggregations on
> this volume of data.
> Marco Russo
> http://www.sqlbi.eu
> http://www.sqljunkies.com/weblog/sqlbi
>
> Jerome wrote:
>
|||yeah aggregation % in Analysis Services
-Susie
Jerome wrote:[vbcol=seagreen]
> Hi Marco,
> What do you mean exactly by "aggregation design"?
> What is exactly preaggregation? Do you know some references on the web?
> Thanks
> Jerome
>
> "Marco Russo" wrote:
|||yeah % aggregation in analysis services
Jerome wrote:[vbcol=seagreen]
> Hi Marco,
> What do you mean exactly by "aggregation design"?
> What is exactly preaggregation? Do you know some references on the web?
> Thanks
> Jerome
>
> "Marco Russo" wrote:

Advice needed on creating a test environment

I have built a sizable (for me, anyway) SQL application for a client on
which they now base their entire business. I need to set up a
parallel installation of the database for development purposes, the
idea being that I can make changes to the development version without
impacting day-to-day operations until it is confirmed that all changes
are working correctly. My problem is this: the application gets
information from their headquarters every night, along with information
from employees throughout the day. Every night the system performs an
hour of batch processing on the combined data. In order for this
sandbox to work, I need to be able to import into the test system an
exact copy of the data on which the various stored procedures are
going to do their work.
It makes sense that the time to copy the data over is after the
production system finishes getting its data and before the batch
processing starts.
Is the best way to do this to create a bunch of DTS packages to
replace the development data tables with the production data using an
append?
Is it possible to take a backup of prod at the appropriate time and restore
it on your test box?
Tom
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
"Matthew Speed" <mspeed@.mspeed.net> wrote in message
news:78asg1pda4tvee5ug2kh5o5et09bu8m458@.4ax.com...
I have built a sizable (for me anyway) SQL application for a client on
which they now base their entire business. I need to set up a
parallel installation of the database for development purposes, the
idea being I can make changes to the development version without
impacting day to day operations until it is confirmed that all changes
are working correctly. My problem is this: the application gets
information from their headquarters every night along with information
from employees throughout the day. Every night the system performs an
hour of batch processing on the combined data. In order for this
sandbox to work, I need to be able to import into the test system an
exact copy of the data only on which the various stored procedures are
going to do their work.
It makes sense that the time to copy the data over is after the
poduction system finishes getting its data and before the batch
processing starts.
Is the best way to do this to create a bunch of DTS packages to
replace the development data tables with the production using an
append?
|||On Thu, 25 Aug 2005 21:11:34 -0400, "Tom Moreau"
<tom@.dont.spam.me.cips.ca> wrote:

>Is it possible to take a backup of prod at the appropriate time and restore
>it on your test box?
I don't think I can do that, because a lot of my development is in
stored procedures. If I back it up and restore, I'd lose my work, so I
would have to save off my work, restore the backup, and then recreate
my work. Right now I am thinking that my only solution is going to be
using DTS to copy all the data over after it does its nightly
retrieval of data from corporate (it is a moderate amount of data).
|||Hi,
I think in this case you could go for transactional replication, for
selected tables only.
You can specify the time at which it needs to copy the data from production
to development.
Hope this helps.
Regards,
Herbert
"Matthew Speed" wrote:

> I have built a sizable (for me anyway) SQL application for a client on
> which they now base their entire business. I need to set up a
> parallel installation of the database for development purposes, the
> idea being I can make changes to the development version without
> impacting day to day operations until it is confirmed that all changes
> are working correctly. My problem is this: the application gets
> information from their headquarters every night along with information
> from employees throughout the day. Every night the system performs an
> hour of batch processing on the combined data. In order for this
> sandbox to work, I need to be able to import into the test system an
> exact copy of the data only on which the various stored procedures are
> going to do their work.
> It makes sense that the time to copy the data over is after the
> poduction system finishes getting its data and before the batch
> processing starts.
> Is the best way to do this to create a bunch of DTS packages to
> replace the development data tables with the production using an
> append?
>
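Transactional replication of selected tables, as suggested above, is set up with the replication stored procedures. A very rough sketch, run at the publisher in the production database; the publication, article, server, and database names are all placeholders, and a distributor must already be configured:

```sql
-- Placeholder names throughout; assumes the distributor is set up.

-- Create a publication in the production database.
EXEC sp_addpublication
     @publication = 'DevRefresh',
     @repl_freq   = 'continuous';

-- Publish one of the selected tables as an article.
EXEC sp_addarticle
     @publication   = 'DevRefresh',
     @article       = 'Schedule',
     @source_object = 'Schedule';

-- Subscribe the development server's copy of the database.
EXEC sp_addsubscription
     @publication    = 'DevRefresh',
     @subscriber     = 'DEVSERVER',
     @destination_db = 'DevCopy';
```

The delivery schedule is controlled through the distribution agent's job schedule, which is how the copy can be timed to land between the nightly corporate load and the batch run.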
|||Why don't you store all your work on disk? I wouldn't dare to work in an environment where my latest
revision exists only in the database. There are also tools that can compare schemas and create a diff DDL
script based on that. This way, you have your source code in .sql files, and after restoring the
backup, you just run those .sql files.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Matthew Speed" <mspeed@.mspeed.net> wrote in message
news:bm4tg15kphl0966uf4ie311ov86q72jgui@.4ax.com...
> On Thu, 25 Aug 2005 21:11:34 -0400, "Tom Moreau"
> <tom@.dont.spam.me.cips.ca> wrote:
>
> I don't think I can do that because a lot of my development is on
> stored procedures. If I back it up and restore I'd lose my work so I
> would have to save off my work, restore the backup and then recreate
> my work. Right now I am thinking that my only solution is going to be
> using DTS to copy all the data over after it does its nightly
> retrieval of data from corporate (it is a moderate amount of data)
|||On Thu, 25 Aug 2005 16:34:37 -0400, Matthew Speed <mspeed@.mspeed.net>
wrote:
>I have built a sizable (for me anyway) SQL application for a client on
>which they now base their entire business. I need to set up a
>parallel installation of the database for development purposes, the
>idea being I can make changes to the development version without
>impacting day to day operations until it is confirmed that all changes
>are working correctly. My problem is this: the application gets
>information from their headquarters every night along with information
>from employees throughout the day. Every night the system performs an
>hour of batch processing on the combined data. In order for this
>sandbox to work, I need to be able to import into the test system an
>exact copy of the data only on which the various stored procedures are
>going to do their work.
>It makes sense that the time to copy the data over is after the
>poduction system finishes getting its data and before the batch
>processing starts.
>Is the best way to do this to create a bunch of DTS packages to
>replace the development data tables with the production using an
>append?
Well, is that the only change made to the production data, appends?
Probably not!
Herbert's suggestion to consider replication is a good one.
But do you really need up-to-date data every day? Usually development
goes on for days or weeks (or months or years),
often against a different schema than production is using (!!!), and only
then is even somewhat up-to-date data needed for testing.
It's pretty easy to script out all your SPs, restore a backup of
production onto your test machine, then script the SPs back in. Or
keep the SPs in a separate database.
Or, arrange some DTS logic sufficient to do your updates on demand,
but probably more than just appends.
Lots of ways to go, but frankly, none of them is painless!
J.
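The script-out-and-restore approach described in this thread can be sketched as follows. The database name, logical file names, and paths are all placeholders; the stored-procedure .sql files kept on disk are simply re-run against the restored copy afterwards:

```sql
-- Restore last night's production backup over the development copy.
-- Every name and path below is a placeholder.
RESTORE DATABASE DevCopy
FROM DISK = 'D:\Backups\Prod_nightly.bak'
WITH MOVE 'Prod_Data' TO 'D:\Data\DevCopy.mdf',
     MOVE 'Prod_Log'  TO 'D:\Data\DevCopy_log.ldf',
     REPLACE;
```

REPLACE overwrites the existing development database, which is exactly why the stored-procedure source needs to live in files rather than only inside that database.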

Advice needed on creating a test environment

I have built a sizable (for me anyway) SQL application for a client on
which they now base their entire business. I need to set up a
parallel installation of the database for development purposes, the
idea being I can make changes to the development version without
impacting day to day operations until it is confirmed that all changes
are working correctly. My problem is this: the application gets
information from their headquarters every night along with information
from employees throughout the day. Every night the system performs an
hour of batch processing on the combined data. In order for this
sandbox to work, I need to be able to import into the test system an
exact copy of the data only on which the various stored procedures are
going to do their work.
It makes sense that the time to copy the data over is after the
poduction system finishes getting its data and before the batch
processing starts.
Is the best way to do this to create a bunch of DTS packages to
replace the development data tables with the production using an
append?Is it possible to take a backup of prod at the appropriate time and restore
it on your test box?
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"Matthew Speed" <mspeed@.mspeed.net> wrote in message
news:78asg1pda4tvee5ug2kh5o5et09bu8m458@.4ax.com...
I have built a sizable (for me anyway) SQL application for a client on
which they now base their entire business. I need to set up a
parallel installation of the database for development purposes, the
idea being I can make changes to the development version without
impacting day to day operations until it is confirmed that all changes
are working correctly. My problem is this: the application gets
information from their headquarters every night along with information
from employees throughout the day. Every night the system performs an
hour of batch processing on the combined data. In order for this
sandbox to work, I need to be able to import into the test system an
exact copy of the data only on which the various stored procedures are
going to do their work.
It makes sense that the time to copy the data over is after the
poduction system finishes getting its data and before the batch
processing starts.
Is the best way to do this to create a bunch of DTS packages to
replace the development data tables with the production using an
append?|||On Thu, 25 Aug 2005 21:11:34 -0400, "Tom Moreau"
<tom@.dont.spam.me.cips.ca> wrote:
>Is it possible to take a backup of prod at the appropriate time and restore
>it on your test box?
I don't think I can do that because a lot of my development is on
stored procedures. If I back it up and restore I'd lose my work so I
would have to save off my work, restore the backup and then recreate
my work. Right now I am thinking that my only solution is going to be
using DTS to copy all the data over after it does its nightly
retrieval of data from corporate (it is a moderate amount of data)|||Hi,
I think in this case you may go for Transactional replication only for
Selected tables.
U can specify the time at which it needs to Copy the data from Production to
Developement.
Any help reply..
Regards,
--
Herbert
"Matthew Speed" wrote:
> I have built a sizable (for me anyway) SQL application for a client on
> which they now base their entire business. I need to set up a
> parallel installation of the database for development purposes, the
> idea being I can make changes to the development version without
> impacting day to day operations until it is confirmed that all changes
> are working correctly. My problem is this: the application gets
> information from their headquarters every night along with information
> from employees throughout the day. Every night the system performs an
> hour of batch processing on the combined data. In order for this
> sandbox to work, I need to be able to import into the test system an
> exact copy of the data only on which the various stored procedures are
> going to do their work.
> It makes sense that the time to copy the data over is after the
> poduction system finishes getting its data and before the batch
> processing starts.
> Is the best way to do this to create a bunch of DTS packages to
> replace the development data tables with the production using an
> append?
>|||Why don't you store all your work to disk? I wouldn't dare to work in an environment where my latest
revision is only in the database. There are also tools that can compare schemas and create a diff DDL
script based on that. This way, you have your source code in .sql files and after restoring the
backup, you just run that .sql file.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Matthew Speed" <mspeed@.mspeed.net> wrote in message
news:bm4tg15kphl0966uf4ie311ov86q72jgui@.4ax.com...
> On Thu, 25 Aug 2005 21:11:34 -0400, "Tom Moreau"
> <tom@.dont.spam.me.cips.ca> wrote:
>>Is it possible to take a backup of prod at the appropriate time and restore
>>it on your test box?
> I don't think I can do that because a lot of my development is on
> stored procedures. If I back it up and restore I'd lose my work so I
> would have to save off my work, restore the backup and then recreate
> my work. Right now I am thinking that my only solution is going to be
> using DTS to copy all the data over after it does its nightly
> retrieval of data from corporate (it is a moderate amount of data)|||On Thu, 25 Aug 2005 16:34:37 -0400, Matthew Speed <mspeed@.mspeed.net>
wrote:
>I have built a sizable (for me anyway) SQL application for a client on
>which they now base their entire business. I need to set up a
>parallel installation of the database for development purposes, the
>idea being I can make changes to the development version without
>impacting day to day operations until it is confirmed that all changes
>are working correctly. My problem is this: the application gets
>information from their headquarters every night along with information
>from employees throughout the day. Every night the system performs an
>hour of batch processing on the combined data. In order for this
>sandbox to work, I need to be able to import into the test system an
>exact copy of the data only on which the various stored procedures are
>going to do their work.
>It makes sense that the time to copy the data over is after the
>poduction system finishes getting its data and before the batch
>processing starts.
>Is the best way to do this to create a bunch of DTS packages to
>replace the development data tables with the production using an
>append?
Well, is that the only change made to the production data, appends?
Probably not!
Herbert's suggestion to consider replication is a good one.
But, do you really need up-to-date data every day? Usually there is
development that goes on for some days or weeks (or months or years),
often to a different schema than production is using (!!!), and only
then is even somewhat up-to-date data needed for testing.
It's pretty easy to script out all your SPs, restore a backup of
production onto your test machine, then script the SPs back in. Or
keep the SPs in a separate database.
Or, arrange some DTS logic sufficient to do your updates on demand,
but probably more than just appends.
Lots of ways to go, but frankly, none of them is painless!
J.
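The restore-then-rescript workflow that Tibor and J. describe might look like this in T-SQL. This is only a sketch: the database name, backup path, and logical file names here are hypothetical and would need to match your own server.

```sql
-- Overwrite the dev copy with last night's production backup.
-- WITH REPLACE discards the existing dev database; MOVE relocates the files
-- so they do not collide with production's paths.
RESTORE DATABASE DevDB
    FROM DISK = N'D:\Backups\ProdDB.bak'
    WITH REPLACE,
         MOVE N'ProdDB_Data' TO N'D:\Data\DevDB.mdf',
         MOVE N'ProdDB_Log'  TO N'D:\Data\DevDB.ldf'
GO
-- Then re-apply the stored procedures kept in source control, e.g. from a
-- command prompt with the SQL Server 2000-era osql tool:
--   osql -S DEVSERVER -d DevDB -E -i StoredProcs.sql
```

Keeping the procedures in a .sql script (or a separate database, as J. suggests) is what makes the nightly restore non-destructive to development work.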

Tuesday, March 6, 2012

Adomd.net and ClickOnce

We need to install ADOMD.NET on client workstations and would like to install it via ClickOnce. Has anyone built a ClickOnce Bootstrapper Prerequisite Package for SQLServer2005_ADOMD.MSI?

Also, when running SQLServer2005_ADOMD.MSI on a client workstation manually, we receive an error stating that an updated version of MSXML 6.0 must be installed as a prerequisite. We run the MSXML6.MSI (11/7/2005) found in the SS05 November 2005 Feature Pack and then run SQLServer2005_ADOMD.MSI and everything is fine. This workstation already has Version 2.0 of the .NET Framework installed on it. Isn't MSXML 6.0 a prerequisite for installing V 2.0 of the .NET framework?

If we need to install the updated version of MSXML 6.0 prior to installing ADOMD.NET, has anyone built a ClickOnce Bootstrapper Prerequisite Package for the updated version of MSXML 6.0?

We realize these components might be part of the SQL Server 2005 Express Edition Prerequisite Package that is available, but would like to avoid an additional 35 MB download if at all possible.

Thanks in advance for your help.

Don't know much about ClickOnce.

Some answers for you:

You are right, ADOMD.NET requires MSXML 6 to be present. If it is not installed, the setup will prompt you.

The .NET Framework does not include the MSXML 6 component.

I doubt ADOMD.NET is part of the SQL Express installation package.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Friday, February 24, 2012

ADO.NET and SQL Server Data Paging

I had to build a custom data paging control for my asp.net app and SQL
Server.
This is what I'm doing:
1. do a SELECT to a SqlDataReader
2. create a DataTable with data from GetSchema()
3. loop through the datareader using .Read() to the point where I want to start
4. add data to the DataTable, with a max of [pageSize] records
5. close the data reader and return
The question is:
Isn't looping reader.Read(), doing nothing, just to get to the point where I
want to start getting data too expensive? Although the DataReader is fast,
won't ".Read()" get the row data from the server to the client? If so,
and I need to get data from record 10000 onward, it is actually reading
all 10 thousand rows of data, so it is a big waste of resources.
Maybe someone who knows more about the internals of the DataReader can
answer this.
Since SQL Server doesn't support "LIMIT"-like keywords, if there is a
better way to page data in SQL Server, I would like to know.
Thanks.
Provided you are using an incrementing number, like an Identity column, you
can store the first and last values and then use a DataSet and bind (avoiding a reader to
fill a DataTable makes the code a bit simpler). If you really like using a
Reader (MS uses one under the hood), it is not a problem.
Using the "pointers" you are storing, you can easily request data from that
point. If you like persisting the data rather than requesting each time, read
the DataTable as XML and pull the subset you want to paint on the screen.
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA
***************************
Think Outside the Box!
***************************
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>
|||You can think about the following query, where PAGE_SIZE and last_ID stand for your page size and the last key value of the previous page:
SELECT TOP PAGE_SIZE * FROM TABLE_NAME WHERE KEY_ID > last_ID ORDER BY KEY_ID
HTH
Elton Wang
elton_wang@.hotmail.com
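On SQL Server 2000, TOP cannot take a variable directly, so Elton's PAGE_SIZE would have to be built into the statement as a literal. A sketch of the same keyset idea with a parameterized page size uses SET ROWCOUNT instead; TABLE_NAME and KEY_ID are the placeholders from his query, and @PageSize/@lastID are illustrative:

```sql
-- SQL Server 2000: TOP does not accept a variable, but SET ROWCOUNT does.
DECLARE @PageSize int, @lastID int
SET @PageSize = 10
SET @lastID = 0   -- last key value from the previous page

SET ROWCOUNT @PageSize
SELECT * FROM TABLE_NAME WHERE KEY_ID > @lastID ORDER BY KEY_ID
SET ROWCOUNT 0    -- reset so later statements return all rows again
```

Resetting ROWCOUNT to 0 afterwards matters, since the setting otherwise applies to every following statement on the connection.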
"Natan Vivo" wrote:

> I had to built a custom data paging control for my asp.net app and SQL
> Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast,
> won't ".Read()" get the row data from the server to the client? If so,
> if I need to get data from record 10000 ahead, it is actually reading
> all 10 thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
>
|||Take a look at:
http://www.aspfaq.com/show.asp?id=2120
Rick Sawtell
MCT, MCSD, MCDBA
|||Check out http://www.aspfaq.com/show.asp?id=2120 for various paging methods
Personally, I build parameterized dynamic SQL on the client side using the
query technique below. The primary key (or unique columns) are specified in
the WHERE and ORDER BY clauses.
use Northwind
--first page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
ORDER BY OrderID ASC
--next page
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID > 10257 --last OrderID from currently displayed page
ORDER BY OrderID ASC
--prev page
SELECT OrderID, CustomerID, OrderDate
FROM
(
SELECT TOP 10 OrderID, CustomerID, OrderDate
FROM Orders
WHERE OrderID < 10258 --first OrderID from currently displayed page
ORDER BY OrderID DESC
) AS a
ORDER BY OrderID ASC
Hope this helps.
Dan Guzman
SQL Server MVP
"Natan Vivo" <nvivo@.terra.com.br> wrote in message
news:u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
|||You can use this
Dim oConnection As New SQLConnection("Provider...
oConnection.Open
Dim oDataAdapter As New SqlDataAdapter("SELECT ...", oConnection)
Dim oDataSet As DataSet = New DataSet
oDataAdapter.Fill(oDataSet, CurrentPageSize, PageSize, mstrTableName)
If you have 400 records and you need the first 100 records then
CurrentPageSize = 0 and PageSize = 100
If you need the next 100 records then CurrentPageSize = 100 and PageSize =
100
Good luck
Marc R.
"Natan Vivo" <nvivo@.terra.com.br> wrote in message news:
u8ZuMXznFHA.3256@.TK2MSFTNGP12.phx.gbl...
>I had to built a custom data paging control for my asp.net app and SQL
>Server.
> This is what I'm doing:
> 1. do a SELECT to a SqlDataReader
> 2. create a DataTable with data from GetSchema()
> 3. loop trough the datareader using .Read() to the point I want to start
> 4. add data to DataTable with max of [pageSize] records
> 5. close the data reader and return
> The question is:
> Isn't looping reader.Read() doing nothing to go to start point where I
> want start getting data too expensive? Although DataReader is fast, won't
> ".Read()" get the row data from the server to the client? If so, if I need
> to get data from record 10000 ahead, it is actually reading all 10
> thousand rows of data, so it is a big waste of resources.
> Maybe someone who knows more about the internals of the DataReader can
> answer this.
> Since SQL Server doesn't support "LIMIT" like keywords, if there is a
> better way to page data in sql server, I would like to know.
> Thanks.
|||Cowboy (Gregory A. Beamer) - MVP wrote:
> Provided you are using an incrementing number, like an Identity column, you
No, I am not. That is why I needed to create my own pagination. There is
no way to determine what is being paginated, nor the type of the
field in the ORDER BY. It may be text, may be a number...
Does anyone know if reader.Read() will actually read the data? If
yes, then probably there is no better way to do this. My implementation
is pretty much like using a DataSet, except that I don't need to instantiate a
DataAdapter or a DataSet.
Thanks.
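For readers hitting Natan's constraint today (paging on an arbitrary ORDER BY column, with no identity key to anchor on), SQL Server 2005 and later offer a server-side option: ROW_NUMBER(). This is a sketch against the Northwind Orders table used elsewhere in the thread; @PageNumber and @PageSize are illustrative parameters, not part of the original discussion:

```sql
-- Server-side paging with ROW_NUMBER() (SQL Server 2005+).
-- The ORDER BY inside OVER() can be any column type, text or number.
SELECT OrderID, CustomerID, OrderDate
FROM (
    SELECT OrderID, CustomerID, OrderDate,
           ROW_NUMBER() OVER (ORDER BY OrderDate, OrderID) AS rn
    FROM Orders
) AS numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize
ORDER BY rn
```

Unlike looping reader.Read(), only the requested page crosses the wire; the server still scans up to the page boundary, but the client does no discarding.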


Thursday, February 16, 2012

ADO error (syntax error or access violation)

I made a UNION ALL between two queries. I've built all of the queries in
Query Analyzer without any problem, but in Reporting Services I get an ADO error
(syntax error or access violation). Does anyone know why?|||Can you give an example and repro steps for Northwind or AdventureWorks?
Send to me directly and I will try it.
--
| From: "Marco van de Kraats" <marcovdkNoSpam999@.dbs.nl>
| Subject: ADO error (syntax error or access violation)
| Date: Thu, 26 May 2005 13:56:40 +0200
| Lines: 6
| X-Priority: 3
| X-MSMail-Priority: Normal
| X-Newsreader: Microsoft Outlook Express 6.00.2900.2180
| X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.2180
| X-RFC2646: Format=Flowed; Original
| Message-ID: <O6Hr7neYFHA.2956@.TK2MSFTNGP10.phx.gbl>
| Newsgroups: microsoft.public.sqlserver.reportingsvcs
| NNTP-Posting-Host: 62.58.60.27
| Path: TK2MSFTNGXA01.phx.gbl!TK2MSFTNGP08.phx.gbl!TK2MSFTNGP10.phx.gbl
| Xref: TK2MSFTNGXA01.phx.gbl microsoft.public.sqlserver.reportingsvcs:44797
| X-Tomcat-NG: microsoft.public.sqlserver.reportingsvcs
|
| I made a union all between two query's in I've built all of the queries
in
| query analyzer without any problem but in reportingssvcs I get a ADO
error
| (syntax error or access violation) Does any one know why?
|
|
|
||||I am having the same problem where I get an "ADO error: syntax error or
access violation" when I try to run my query. Here is my query:
SELECT Table1.Name,
(SELECT Table2.Billings
WHERE Table2.Date >= CONVERT(datetime,
@StartDate) AND Table2.Date <= CONVERT(datetime, @EndDate))
AS Billings
FROM Table2 RIGHT OUTER JOIN
Table1 ON Table2.Name = Table1.Name
GROUP BY Table1.Name, Table2.Date, Table2.Billings
""Brad Syputa - MS"" wrote:
> Can you give an example and repro steps for Northwind or AdventureWorks?
> Send to me directly and I will try it.
> --
> |
> | I made a union all between two query's in I've built all of the queries
> in
> | query analyzer without any problem but in reportingssvcs I get a ADO
> error
> | (syntax error or access violation) Does any one know why?
> |
> |
> |
> |
>|||I found out that:
- if you use a parameter in only the second query, the error occurs when the query is complex (no example)
- if you use a parameter in both the first and the second query, the error occurs when the query is complex (no example)
- if you use a parameter in the first or the second query together with a UNION ALL statement, the error occurs
See the examples:
On Northwind, WITH error (parameter in the first query, UNION ALL):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = @orderID)
UNION ALL
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees

On Northwind, WITH error (parameter in both queries, UNION ALL):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = @orderID)
UNION ALL
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees
WHERE  (EmployeeID = @orderID)

On Northwind, WITH error (parameter in the second query only, UNION ALL):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = 1)
UNION ALL
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees
WHERE  (EmployeeID = @orderID)

On Northwind, NO error (parameter in the first query, plain UNION):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = @orderID)
UNION
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees

On Northwind, NO error (no parameter, UNION ALL):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = 1)
UNION ALL
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees

On Northwind, NO error (parameter in the second query, plain UNION):
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  (OrderID = 1)
UNION
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees
WHERE  (EmployeeID = @orderID)
In short: as the examples show, the error occurs whenever a parameter is combined with UNION ALL; with a plain UNION, or without a parameter, the same queries run fine.
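A workaround that is sometimes suggested for this kind of Reporting Services parameter problem (a sketch, not something confirmed in this thread; the procedure name is made up for illustration) is to move the UNION ALL into a stored procedure, so the report binds the parameter once and the server, rather than the report designer, parses the statement:

```sql
-- Hypothetical workaround on Northwind: hide the UNION ALL inside a stored
-- procedure so Reporting Services only has to bind a single @orderID.
CREATE PROCEDURE dbo.OrdersAndEmployees
    @orderID int
AS
SELECT OrderDate, OrderID, 0 AS bel
FROM   Orders
WHERE  OrderID = @orderID
UNION ALL
SELECT 2, 0 AS fiets, EmployeeID AS bel
FROM   Employees
WHERE  EmployeeID = @orderID
```

In the report dataset, the command type would then be set to StoredProcedure with @orderID as its only parameter.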
"Brad Syputa - MS" <bradsy@.Online.Microsoft.com> wrote in message
news:LVe3MZiZFHA.3928@.TK2MSFTNGXA01.phx.gbl...
> Can you give an example and repro steps for Northwind or AdventureWorks?
> Send to me directly and I will try it.