Saturday, February 25, 2012
Anonymous access in Reporting Services
Trying to set up a Reporting Services / Windows SharePoint Services demo
site and I am a little confused about the best way to allow anonymous
access. I would like to allow anonymous users to run reports but not set
properties, create subscriptions, or upload reports. I have seen some posts
that indicate that this is difficult to do correctly and that I should build
a custom authentication module. What is the state of the art here? Does
using SP1 or waiting for SP2 help?
Thanks,
Steve
You can assign a user to the anonymous account. Give this user just enough
rights to do what you want.
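For anyone who wants to script that "just enough rights" assignment rather than click through Report Manager, here is a minimal sketch in VB.NET against the Reporting Services SOAP endpoint. The proxy name rs.ReportingService, the folder /DemoReports and the account MACHINE\RSAnonymous are assumptions for illustration, not anything from this thread; the idea is simply to append the anonymous account with only the built-in Browser role so it can run reports but not set properties, subscribe or upload.
Imports System.Net
Module GrantAnonymousBrowser
    Sub Main()
        ' Proxy generated from http://<server>/ReportServer/ReportService.asmx
        Dim svc As New rs.ReportingService()
        svc.Credentials = CredentialCache.DefaultCredentials
        ' Keep whatever policies already exist on the folder...
        Dim inheritParent As Boolean
        Dim existing As rs.Policy() = svc.GetPolicies("/DemoReports", inheritParent)
        Dim policies(existing.Length) As rs.Policy
        existing.CopyTo(policies, 0)
        ' ...and add the anonymous account with only the Browser role.
        Dim anon As New rs.Policy()
        anon.GroupUserName = "MACHINE\RSAnonymous"   ' hypothetical local account mapped to IIS anonymous access
        anon.Roles = New rs.Role() {New rs.Role()}
        anon.Roles(0).Name = "Browser"
        policies(existing.Length) = anon
        svc.SetPolicies("/DemoReports", policies)
    End Sub
End Module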
Anonymous Access from Pivot Table to OLAP 2005
My ASP.NET web application connects to Analysis Services 2005 through the PivotTable client.
To use anonymous access with AS2000 I used to add this property to the connection string of the pivot: "SSPI=anonymous".
Which property gives me the same thing with AS2005?
Thanks!
Hi,
The SSPI connection property is still there, so it should work the same. Are you getting an error?
David Botzenhart
|||Hey David - good to hear from you!
Yes, you can still use SSPI=anonymous, but in AS2005 I really recommend using the standard OLE DB property for this purpose, i.e.
Protection Level=none
It has slightly better (more standard) behavior.
Mosha (http://www.mosha.com/msolap )
|||Thanks, it's working.
But what if I set the "Impersonation Level" property to Anonymous? Is that the same as SSPI=anonymous?
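For reference, here is a minimal sketch (VB.NET over the OLE DB provider) of the connection Mosha suggests; the server and catalog names are placeholders. Note that "Impersonation Level" is a separate standard OLE DB property from "Protection Level", so test before treating the two settings as equivalent.
Imports System.Data.OleDb
Module AnonymousOlapConnection
    Sub Main()
        ' "Protection Level=None" is the standard OLE DB property recommended above
        ' in place of the old "SSPI=anonymous" trick used against AS2000.
        Dim cs As String = _
            "Provider=MSOLAP.3;" & _
            "Data Source=MyOlapServer;" & _
            "Initial Catalog=Adventure Works DW;" & _
            "Protection Level=None"
        Dim conn As New OleDbConnection(cs)
        conn.Open()    ' connects without an authenticated user context
        Console.WriteLine("Connection state: " & conn.State.ToString())
        conn.Close()
    End Sub
End Module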
Anonymous access
Officially the line seems to be that only accredited users can access
Reports, but I've seen suggestions for workarounds. What are the best
options for allowing Users access to Reports Services from a different
domain, apart from requiring them to enter user credentials every time
they point their browser at the ReportsServer. There seems to be
suggestions that with some tweaking Anonymous users can run reports.
I'm invoking reports from a C# desktop app by building the url - is
there some way to pass the credentials in the url? I know how to
generate reports entirely from C# code(and pass credentials), but that
loses the interactive functionality of the browser interface...
brian smith
This doesn't completely answer your question but it does address "anonymous
access"
http://devguy.com/bb/viewtopic.php?p=1773
> I'm invoking reports from a C# desktop app by building the url - is there
> some way to pass the credentials in the url?
Not trivially.
"Brian Smith" <bsmith@.NO.SPAM.schemiotics.co.uk> wrote in message
news:O15zpaO8FHA.3636@.TK2MSFTNGP09.phx.gbl...
> Officially the line seems to be that only accredited users can access
> Reports, but I've seen suggestions for work arounds. What are the best
> options for allowing Users access to Reports Services from a different
> domain, apart from requiring them to enter user credentials every time
> they point their browser at the ReportsServer. There seems to be
> suggestions that with some tweaking Anonymous users can run reports.
> I'm invoking reports from a C# desktop app by building the url - is there
> some way to pass the credentials in the url? I know how to generate
> reports entirely from C# code(and pass credentials), but that loses the
> interactive functionality of the browser interface...
> brian smith
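For what it's worth, a minimal sketch of the programmatic route, shown in VB.NET to match the other samples on this page: rather than trying to embed Windows credentials in the URL, the desktop app can attach them to the HTTP request that fetches the URL-access rendering. The server name, report path and account below are hypothetical, and this returns a rendered file rather than the interactive browser viewer.
Imports System.Net
Module RenderWithCredentials
    Sub Main()
        ' Standard URL-access request asking the report server for a PDF rendering.
        Dim url As String = _
            "http://reportsrv/ReportServer?/SalesReports/RegionSummary" & _
            "&rs:Command=Render&rs:Format=PDF"
        Dim client As New WebClient()
        ' Credentials for the other domain, supplied once in code instead of
        ' being typed by the user every time the browser hits the ReportServer.
        client.Credentials = New NetworkCredential("reportuser", "P@ssw0rd", "OTHERDOMAIN")
        client.DownloadFile(url, "C:\temp\RegionSummary.pdf")
    End Sub
End Module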
Anonymous subscription: could not make it work
here is how the computers are connected:
2 iNet computers with the 1433 port redirected to other computers on the local network which have SQL Server; the synchronization is made between these iNet computers; when establishing the connection to the distributor I get this message: the SQL Server doesn't exist or access is not allowed;
the push subscription works but I need an anonymous subscription.
What is the problem and how can I fix it?
Is this a security problem?
Any questions are welcome.
Read Books Online... very good for this.
Also, a publication needs to have the correct people given access to it.
Cheers
SG
Anomalous heavy reads when heavily loaded
In a large-scale tuning exercise we are repeatedly seeing anomalous
behaviors - small (or large) SPs that run fine on all data values when
run as single transactions, but which repeatedly run with 1000x more
reads in at least one place on a trace.
We are producing the load by rerunning a trace via profiler, so we
expect the test box to be maxed out, and it is.
When I say "repeatedly", I have not verified that it's always the same
transactions that are running badly. Yes, of course it *may* be the
overall state of the database at that point in time that "is the
problem", but at least one of these SPs is a fairly simple piece of
business that shouldn't be that sensitive to ANY possible data
configurations.
It has been suggested that it is an optimizer FEATURE that it will
produce different plans when it sees the system is heavily loaded. I
was not aware of that. Can anybody tell me just what it has to see to
decide on this load? I heard it might be CPU, and synthesized some
bogus CPU loads to see if that would make my little SPs misbehave, but
so far that has not worked - the SP (the simplest one of about a dozen
I've seen doing this so far) insists on running in a normal 100 reads
instead of the 1.7m reads it displays occasionally in the trace.
Just looking for anyone who has even seen similar anomalies when
testing under load, I'm not even asking for further diagnostics,
though anything you have I'd like to hear.
Thanks.
Josh
Hi
Have you read the section on parameter sniffing in Ken Henderson's
"The Guru's Guide to SQL Server Architecture and Internals" ISBN
0-201-70047-6
also check out the post http://tinyurl.com/983uf
John
"jxstern" wrote:
> In a large-scale tuning exercise we are repeatedly seeing anomalous
> behaviors - small (or large) SPs that run fine on all data values when
> run as single transactions, but which repeatably runs with 1000x more
> reads in at least one place on a trace.
> We are producing the load by reruning a trace via profiler, so we
> expect the test box to be maxed out, and it is.
> When I say "repeatedly", I have not verified that it's always the same
> transactions that are running badly. Yes, of course it *may* be the
> overall state of the database at that point in time that "is the
> problem", but at least one of these SPs is a fairly simple piece of
> business that shouldn't be that sensitive to ANY possible data
> configurations.
> It has been suggested that it is an optimizer FEATURE that it will
> produce different plans when it sees the system is heavily loaded. I
> was not aware of that. Can anybody tell me just what it has to see to
> decide on this load? I heard it might be CPU, and synthesized some
> bogus CPU loads to see if that would make my little SPs misbehave, but
> so far that has not worked - the SP (the simplest one of about a dozen
> I've seen doing this so far) insists on running in a normal 100 reads
> instead of the 1.7m reads it displays occassionally in the trace.
> Just looking for anyone who has even seen similar anomalies when
> testing under load, I'm not even asking for further diagnostics,
> though anything you have I'd like to hear.
> Thanks.
> Josh
>
|||On Thu, 20 Oct 2005 00:50:03 -0700, John Bell
<jbellnewsposts@.hotmail.com> wrote:
>Have you read the section on parameter sniffing in Ken Henderson's
>"The Guru's Guide to SQL Server Architecture and Internals" ISBN
>0-201-70047-6
>also check out the post http://tinyurl.com/983uf
I'm aware of sniffing, but I don't see how it can be that when the
same parameters run later give (very!) different runtimes.
(benchmark process does not show or validate results).
J.
|||Hi
Have you looked at the locking events in SQL Profiler?
John
"jxstern" wrote:
> On Thu, 20 Oct 2005 00:50:03 -0700, John Bell
> <jbellnewsposts@.hotmail.com> wrote:
> I'm aware of sniffing, but I don't see how it can be that when the
> same parameters run later give (very!) different runtimes.
> (benchmark process does not show or validate results).
> J.
>
|||Have not done that, but I'm curious, could that possibly explain the
situation?
What lock-checking we've done in the way of investigating performance
has shown locking and waits to be very rare, almost impossible it
should correspond to the incidences we're seeing here.
J.
On Thu, 20 Oct 2005 23:27:02 -0700, John Bell
<jbellnewsposts@.hotmail.com> wrote:
>Hi
>Have you looked at the locking events in SQL Profiler?
>John
>"jxstern" wrote:
|||Hi
The abnormal number of reads would more likely be down to a poor query
plan, but if you have ruled out recompiles, parameter sniffing, missing or poor
stats and fragmented indexes, you need to look elsewhere. Although it may not
explain the higher number of reads, locking/blocking is one of the most common
causes of intermittently slow queries.
John
"JXStern" wrote:
> Have not done that, but I'm curious, could that possibly explain the
> situation?
> What lock-checking we've done in the way of investigating performance
> has shown locking and waits to be very rare, almost impossible it
> should correspond to the incidences we're seeing here.
> J.
>
> On Thu, 20 Oct 2005 23:27:02 -0700, John Bell
> <jbellnewsposts@.hotmail.com> wrote:
>
Annualizing Measures
In the financial services world, a common requirement is to annualize a value. We've made several attempts at this. The challenge comes when using a standard Date hierarchy (Year, Quarter, Month, Date). We need to annualize a measure based on a running total of the last 6 months. We also need to take into account the early dates of the warehouse, where X months might not be available, by multiplying the aggregate by Y divided by the count of non-empty members.
This seems to work for this part of the problem:
Create Member CurrentCube.[Measures].[Rolling 6m Contribution]
AS Case
When COUNT(LASTPERIODS(6, [Date].[Month].CurrentMember), EXCLUDEEMPTY) + 1 < 6
Then Aggregate(LASTPERIODS ( 6, [Date].[Month].CurrentMember),[Measures].[Total Contribution])
* 6 /
(COUNT(LASTPERIODS(6, [Date].[Month].CurrentMember), EXCLUDEEMPTY) + 1)
Else
Aggregate(LASTPERIODS ( 6, [Date].[Month].CurrentMember),[Measures].[Total Contribution])
End,
Format_String = "Currency",
Non_Empty_Behavior = [Measures].[Total Contribution],
Visible = 1;
We've also tried this with the YTD function, but using YTD gives a small sample size for the annual figure in the first months of each year.
Next, we need to be able to make the rolling sum work at all levels of the date hierarchy. This basically needs to act like LastNonEmpty aggregation on normal measures. We created a second measure:
Create Member CurrentCube.[Measures].[Rolling 6a Contribution]
AS
([Measures].[Rolling 6m Contribution],
ClosingPeriod([Date].[Calendar Year Hierarchy].[Month],
[Date].[Calendar Year Hierarchy].CurrentMember)),
Format_String = "Currency",
Non_Empty_Behavior = [Measures].[Total Contribution],
Visible = 1;
This seems to work for Year and Quarter except that the final Year has a Null value.
Does anyone have any suggestions on annualization in general or what we're doing wrong in the code above?
I am not sure that I can solve all your business/MDX problems but it seems to me that you will need a sliding time member.
Here is a thread that I have participated in that perhaps can give you a clue: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=996037&SiteID=1
Mosha also has a good post on his blog (http://www.sqljunkies.com/WebLog/mosha/archive/2006/10/25/time_calculations_parallelperiod.aspx).
HTH
Thomas Ivarsson
|||Thanks for the suggestion Thomas. Both posts as well as Danial Shroyer's white paper have great info.
However, my problem here isn't so much getting the running total as making it work with a user hierarchy. In the first code sample above, I can get the sliding total. I can't figure out how to have the year and quarter levels reflect the monthly amount like a LastNonEmpty measure. For example, in November of 2006, the 2006 level needs to be November's sliding total (times 2 to annualize). The second code sample almost works.
|||To clarify a bit, what I need is a running total that can be shown alongside other measures and calculated measures in a report. The following MDX:
Create Member CurrentCube.[Measures].[Rolling 6m Contribution]
AS Case
When COUNT(LASTPERIODS(6, [Date].[Month].CurrentMember), EXCLUDEEMPTY) + 1 < 6
Then Aggregate(LASTPERIODS ( 6, [Date].[Month].CurrentMember),[Measures].[Total Contribution])
* 6 /
(COUNT(LASTPERIODS(6, [Date].[Month].CurrentMember), EXCLUDEEMPTY) + 1)
Else
Aggregate(LASTPERIODS ( 6, [Date].[Month].CurrentMember),[Measures].[Total Contribution])
End,
Format_String = "Currency",
Non_Empty_Behavior = [Measures].[Total Contribution],
Visible = 1;
Yields the following results in OWC:
The adjustment for the first 5 months of data in the warehouse works and the running sum works for 2005, all quarters, and all months. The remaining problem is the missing total for 2006. Can anyone tell me why this doesn't work?
|||For anyone who's interested, the problem in the above example is that 2006 hasn't "closed" yet. ClosingPeriod returns the last child of Year, but since there's no fact data in the cube for December yet, the result is null.
It would be great if you could set aggregation on a calculated member (LastNonEmpty).
Annoying, can't insert into DB for some reason, even using a stored procedure.
First is the code for the insert
Sub AddCollector(Sender As Object, E As EventArgs)
    ' (assumes the page imports System.Data and System.Data.SqlClient)
    Message.InnerHtml = ""
    If Page.IsValid Then
        Dim ConnectionString As String = "server='(local)'; trusted_connection=true; database='MyCollection'"
        Dim myConnection As New SqlConnection(ConnectionString)
        Dim myCommand As SqlCommand
        Dim InsertCmd As String = "insert into Collectors (CollectorID, Name, EmailAddress, Password, Information) " & _
            "values (@CollectorID, @Name, @Email, @Password, @Information)"
        myCommand = New SqlCommand(InsertCmd, myConnection)
        myCommand.Connection.Open()
        myCommand.Parameters.Add(New SqlParameter("@CollectorID", SqlDbType.NVarChar, 50))
        myCommand.Parameters("@CollectorID").Value = CollectorID.Text
        myCommand.Parameters.Add(New SqlParameter("@Name", SqlDbType.NVarChar, 50))
        myCommand.Parameters("@Name").Value = Name.Text
        myCommand.Parameters.Add(New SqlParameter("@Email", SqlDbType.NVarChar, 50))
        myCommand.Parameters("@Email").Value = EmailAddress.Text
        myCommand.Parameters.Add(New SqlParameter("@Password", SqlDbType.NVarChar, 50))
        myCommand.Parameters("@Password").Value = Password.Text
        myCommand.Parameters.Add(New SqlParameter("@Information", SqlDbType.NVarChar, 3000))
        myCommand.Parameters("@Information").Value = Information.Text
        Try
            myCommand.ExecuteNonQuery()
            Message.InnerHtml = "Record Added<br>"
        Catch Exp As SqlException
            If Exp.Number = 2627 Then
                Message.InnerHtml = "ERROR: A record already exists with the same primary key"
            Else
                Message.InnerHtml = "ERROR: Could not add record"
            End If
            Message.Style("color") = "red"
        End Try
        myCommand.Connection.Close()
    End If
End Sub
No matter what I get a "Could not add record" message
Even substituting the insert command string with my stored procedure I would get the same thing
Stored Procedure:
CREATE Procedure CollectorAdd
(
@Name nvarchar(50),
@Email nvarchar(50),
@Password nvarchar(50),
@Information nvarchar(3000),
@CustomerID int OUTPUT
)
AS
INSERT Collectors
(
Name,
EMailAddress,
Password,
Information
)
VALUES
(
@Name,
@Email,
@Password,
@Information
)
GO
Can anyone see any problems with this code? It looks good to me but I get the same message always.
Thanks
Why not print out the actual exception text (exp.ToString(), for instance)? Then the exception will tell you why.|||Wow, nice little trick. It helped me find the problem. I had an expected parameter in the SP that I was not supplying.
Thank You
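For completeness, here is a minimal sketch (VB.NET) of calling CollectorAdd so that the @CustomerID OUTPUT parameter the procedure expects is actually supplied - the missing parameter behind the silent "Could not add record" failure. The connection string and field values are placeholders.
Imports System.Data
Imports System.Data.SqlClient
Module CallCollectorAdd
    Sub Main()
        Dim cs As String = "server=(local); trusted_connection=true; database=MyCollection"
        Dim conn As New SqlConnection(cs)
        Dim cmd As New SqlCommand("CollectorAdd", conn)
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 50).Value = "Jane Collector"
        cmd.Parameters.Add("@Email", SqlDbType.NVarChar, 50).Value = "jane@example.com"
        cmd.Parameters.Add("@Password", SqlDbType.NVarChar, 50).Value = "secret"
        cmd.Parameters.Add("@Information", SqlDbType.NVarChar, 3000).Value = "New collector"
        ' The parameter that was being left out: declare it and mark it as output.
        Dim idParam As SqlParameter = cmd.Parameters.Add("@CustomerID", SqlDbType.Int)
        idParam.Direction = ParameterDirection.Output
        Try
            conn.Open()
            cmd.ExecuteNonQuery()
            ' Note: the procedure as posted never assigns @CustomerID, so don't
            ' expect a real ID back until something like
            ' SET @CustomerID = SCOPE_IDENTITY() is added after the INSERT.
            Console.WriteLine("Record added; @CustomerID = {0}", idParam.Value)
        Catch ex As SqlException
            Console.WriteLine(ex.ToString())   ' print the real error, per the thread
        Finally
            conn.Close()
        End Try
    End Sub
End Module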
annoying SQL Server error message
We have developed many applications for a major customer, connected
to the same SQL Server database. Those apps have been programmed in VB6
and VB.NET. The SQL Server is located on a dual-processor server,
running Windows 2000 Server, with 3 gigs RAM. The apps are installed on
Windows 2000 and Windows XP/SP2 client.
On the Windows 2000 client PCs, there is no problem at all. But on
the XPs, we get this error message popup: "Unable to load SQL Server
OLEDB provider resource DLL. The application cannot continue". It looks
like it appears randomly.
The same application, when installed on a Win2000, doesn't have any
problem at all. Just the damn XP machines. We cannot change the OS of
these PC's (enterprise rules).
This error is a major problem because it stops the application until
you acknowledge the error. The app is used for realtime control of a
factory, so, you see the problem...
XP workstation uses MDAC 2.8 SP1 for Windows XP SP2
2000 uses MDAC 2.6 RTM
Apps has been compiled on a MDAC 2.8 SP1 Win XP SP2 computer
How can I fix this?
thanks a lot for your help, it's really appreciated.
1)Have you got the latest MDAC installation ?
2)Track down all instances of sqloledb.rll and sqloledb.dll . Make sure
they are the latest versions and match up
Jack Vamvas
___________________________________
Receive free SQL tips - www.ciquery.com/sqlserver.htm
"Dominic30" <gagned@.videotron.ca> wrote in message
news:1142610474.169168.15240@.i40g2000cwc.googlegro ups.com...
> Hi,
> We have developped many applications, for a major customer, connected
> to the same SQL Server database. Those apps have been programmed in VB6
> and VB.NET. The SQL Server is located on a dual-processor server,
> running Windows 2000 Server, with 3 gigs RAM. The apps are installed on
> Windows 2000 and Windows XP/SP2 client.
> And the Windows 2000 client PC's, there is no problem at all. But, on
> the XPs, we have, this error message popup: "Unable to load SQL Server
> OLEDB provider ressource DLL. The application cannot continue". Look
> like it appears randomly.
> The same application, when installed on a Win2000, don't have any
> problem at all. Just the damn XP machines. We cannot change the OS of
> these PC's (enterprise rules).
> This error is a major problem because it stops the application until
> you acknowledge the error. The app is used for realtime control of a
> factory, so, you see the problem...
> XP workstation uses MDAC 2.8 SP1 for Windows XP SP2
> 2000 uses MDAC 2.6 RTM
> Apps has been compiled on a MDAC 2.8 SP1 Win XP SP2 computer
> How can I fix this?
> thanks a lot for your help, it's really appreciated.
>
|||1) Yeah, I've got the latest MDAC installation. Anyway, I can't
install another MDAC version on the XP machines; the install failed.
2) These files match up, yes. I've checked all the stations and it
looks fine.
Any more ideas?
By the way, thanks for the reply.
|||In what path is the SQLOLEDB.RLL file?
You mention that all the apps are using the same SQL Server. Is this through
Linked Servers?
Jack Vamvas
___________________________________
Receive free SQL tips - www.ciquery.com/sqlserver.htm
"Dominic30" <gagned@.videotron.ca> wrote in message
news:1142613417.547606.39710@.z34g2000cwc.googlegro ups.com...
> 1) Yeah, I've got the latest MDAC installation. Anyway, I can't
> installed another MDAC version on the XP machines, the install failed.
> 2) These files match up yes. I've checked all the station and it's
> looks fine.
> any more idea?
> by the way, thanks for the reply
>
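To make Jack's version check quicker to repeat across many workstations, here is a small sketch in VB.NET; the paths assume the default MDAC locations and may differ on your machines, so search the drive for other copies too. Run it on a failing XP client and on a working Windows 2000 client and compare the FileVersion values for the provider DLL and its resource DLL.
Imports System.Diagnostics
Imports System.IO
Module CheckSqlOleDbVersions
    Sub Main()
        ' Default MDAC locations for the OLE DB provider and its resource DLL.
        Dim files() As String = New String() { _
            "C:\Program Files\Common Files\System\Ole DB\sqloledb.dll", _
            "C:\Program Files\Common Files\System\Ole DB\Resources\1033\sqloledb.rll"}
        Dim filePath As String
        For Each filePath In files
            If File.Exists(filePath) Then
                Dim info As FileVersionInfo = FileVersionInfo.GetVersionInfo(filePath)
                Console.WriteLine("{0} -> {1}", filePath, info.FileVersion)
            Else
                Console.WriteLine("{0} -> not found", filePath)
            End If
        Next
    End Sub
End Module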
annoying SQL Error msg - "INTRA-QUERY"
Probably most of you know this:
Intra-query parallelism caused your server command (process ID #57) to deadlock. Rerun the query without intra-query parallelism by using the query hint option (maxdop 1).
I've seen MS KnowledgeBase for that (http://support.microsoft.com/default.aspx?scid=kb;EN-US;837983)
But I have some probs with it:
1. I don't have permissions for this kind of queries
"USE master
EXEC sp_configure 'show advanced option', '1'
RECONFIGURE WITH OVERRIDE
GO"
2. I don't know what "intra-query parallelism" is...
I'm attaching the query I use. The query runs OK for some parameters but gets this error msg on other parameters.
I'm using a single CPU on 2003 STD Edition.
Query:
create table #mytemp_table
(client_id int not null, commission_value int null)
insert into #mytemp_table (client_id, commission_value)
select client_id, sum(transfer_value) from users_transfers where transfer_type in (1,3) and is_paid = 1 group by client_id
select
u.name [Vendor],
u.email,
sum(cost) as Payment,
cmpd.vendor_id,
ua.name [Agent],
vd.join_date,
commission_value [paid],
allow_sign
from
reports ra
left outer join codes ctd on ra.affiliation_code_show = ctd.affiliation_code_show
left outer join traffic cmp on cmp.campaign_id = ctd.campaign_id
left outer join traffic_details cmpd on cmp.campaign_id = cmpd.campaign_id
left outer join userssb u on cmpd.vendor_id = u.client_id
left outer join userssb ua on cmpd.agent_id = ua.client_id
left outer join users_details vd on u.client_id = vd.client_id
left outer join #mytemp_table com_paid on com_paid.client_id = u.client_id
left outer join allow_db asd on asd.client_id = u.client_id
where
[conditions]
group by
[all the group fields]
having sum(cost) > 2999
order by u.name
drop table #mytemp_table
1. Running sp_configure affects the entire server. This is really not what you want to do anyway.
This is how you use Maxdop statement hint.
select
u.name [Vendor],
u.email,
sum(cost) as Payment,
cmpd.vendor_id,
ua.name [Agent],
vd.join_date,
commission_value [paid],
allow_sign
from
reports ra
left outer join codes ctd on ra.affiliation_code_show = ctd.affiliation_code_show
left outer join traffic cmp on cmp.campaign_id = ctd.campaign_id
left outer join traffic_details cmpd on cmp.campaign_id = cmpd.campaign_id
left outer join userssb u on cmpd.vendor_id = u.client_id
left outer join userssb ua on cmpd.agent_id = ua.client_id
left outer join users_details vd on u.client_id = vd.client_id
left outer join #mytemp_table com_paid on com_paid.client_id = u.client_id
left outer join allow_db asd on asd.client_id = u.client_id
where
[conditions]
group by
[all the group fields]
having sum(cost) > 2999
order by u.name
OPTION(MAXDOP 1)
--OPTION (MERGE JOIN, LOOP JOIN)
2. You probably have hyperthreading (i.e. 1 physical CPU, 2 logical). Intra-query parallelism means the statement gets split up and executed in parallel; each logical CPU takes a part of the query and performs the work.
Annoying scrolling in SSIS
I have a package with 2 and only 2 tasks in it - which obviously easily fit onto the control-flow surface at the same time.
When I open the package though, why-oh-why is there nothing on the screen therefore forcing me to scroll all the way over to the left and all the way up to the top to see my 2 tasks (at which point the scroll bars disappear).
Oh and I close the package down, open it straight back up again and the same thing happens.
OK, it's nothing major but it's very annoying. Anyone concur?
-Jamie
Annoying problem with SQL Server 2000 Query Analyzer
When I open a saved sql file it opens fine, but when I try to load a
different file in the same query window, QA hangs on "Loading SQL Query...".
Does anybody have any ideas on why this is happening? I already removed and
re-installed SQL Server (client tools only) and applied SP3.
Thanks
Dan
Have a look at
http://support.microsoft.com/default.aspx?kbid=830767
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
"Daniel Reber" <nospam@.nospam.com> wrote in message
news:eFtkS1SKEHA.2144@.TK2MSFTNGP10.phx.gbl...
> When I open a saved sql file it opens fine, but when I try to load a
> different file in the same query window, QA hangs on "Loading SQL Query...".
> Does anybody have any ideas on why this is happening? I already removed and
> re-installed SQL Server (client tools only) and applied SP3.
> Thanks
> Dan
>
|||Thanks for the link...I am on hold with MS right now to get the hotfix.
Terrible music...
Dan Reber
"Jasper Smith" <jasper_smith9@.hotmail.com> wrote in message
news:O7c6naTKEHA.1192@.TK2MSFTNGP11.phx.gbl...
> Have a look at
> http://support.microsoft.com/default.aspx?kbid=830767
> --
> HTH
> Jasper Smith (SQL Server MVP)
> I support PASS - the definitive, global
> community for SQL Server professionals -
> http://www.sqlpass.org
>
> "Daniel Reber" <nospam@.nospam.com> wrote in message
> news:eFtkS1SKEHA.2144@.TK2MSFTNGP10.phx.gbl...
>
|||Hi Daniel,
It seems you request the hotfix mentioned in the article:
830767 FIX: SQL Query Analyzer May Stop Responding When You Close a Query
http://support.microsoft.com/?id=830767
Please contact Microsoft Product Support Services to obtain the hotfix. For
a complete list of Microsoft Product Support Services phone numbers and
information about support costs, visit the following Microsoft Web site:
http://support.microsoft.com/default.aspx?scid=fh;[LN];CNTACTMS
Regards,
Michael Shao
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
|||The hotfix corrected my issue, thanks.
Dan Reber
""Yuan Shao"" <v-yshao@.online.microsoft.com> wrote in message
news:jWUPnlUKEHA.2360@.cpmsftngxa10.phx.gbl...
> Hi Daniel,
> It seems you request the hotfix mentioned in the article:
> 830767 FIX: SQL Query Analyzer May Stop Responding When You Close a Query
> http://support.microsoft.com/?id=830767
> Please contact Microsoft Product Support Services to obtain the hotfix. For
> a complete list of Microsoft Product Support Services phone numbers and
> information about support costs, visit the following Microsoft Web site:
> http://support.microsoft.com/default.aspx?scid=fh;[LN];CNTACTMS
> Regards,
> Michael Shao
> Microsoft Online Partner Support
> Get Secure! - www.microsoft.com/security
> This posting is provided "as is" with no warranties and confers no rights.
>
|||What number do you call? I'm not spending 99 bucks for this hotfix!
BigMac
"Daniel Reber" <nospam@.nospam.com> wrote in message
news:OiS5GsTKEHA.1192@.TK2MSFTNGP11.phx.gbl...
> Thanks for the link...I am on hold with MS right now to get the hotfix.
> Terrible music...
> Dan Reber
> "Jasper Smith" <jasper_smith9@.hotmail.com> wrote in message
> news:O7c6naTKEHA.1192@.TK2MSFTNGP11.phx.gbl...
>
|||How much did it cost, Daniel?
BigMac
"Daniel Reber" <nospam@.nospam.com> wrote in message
news:%23wkc66UKEHA.3472@.TK2MSFTNGP09.phx.gbl...
> The hotfix corrected my issue, thanks.
> Dan Reber
> ""Yuan Shao"" <v-yshao@.online.microsoft.com> wrote in message
> news:jWUPnlUKEHA.2360@.cpmsftngxa10.phx.gbl...
>
|||Why is MS not providing easier access to this fix? Can people share hotfixes
to avoid customer support? (I'm legit, just hate dealing with them).
"Daniel Reber" wrote:
> The hotfix corrected my issue, thanks.
> Dan Reber
> ""Yuan Shao"" <v-yshao@.online.microsoft.com> wrote in message
> news:jWUPnlUKEHA.2360@.cpmsftngxa10.phx.gbl...
>
>
|||In general, we make you call in and talk to a support engineer because we
want to track who is using which hotfix. These things aren't tested to the
same degree as service packs and if we run into a serious problem with one
of them, it's nice to be able to figure out who we need to inform. In
theory. :-) I say that because I'm not on the support team and I'm not sure
how stringently that is followed. We also want to make sure you're actually
running into the specific problem that the hotfix corrects. Again, because
they're not tested as thoroughly and we don't advise running them without
need.
Once the support engineer has confirmed that you're hitting a bug fixed by
the hotfix, they'll refund your money.
Sincerely,
Stephen Dybing
This posting is provided "AS IS" with no warranties, and confers no rights.
"Brian T" <Brian T@.discussions.microsoft.com> wrote in message
news:E70AD0EA-1241-41E1-A951-2206780E15B4@.microsoft.com...
> Why is MS not providing easier access to this fix? Can people share
> hotfixes
> to avoid customer support? (I'm legit, just hate dealing with them).
> "Daniel Reber" wrote:
Annoying omission in SQL Server Import and Export Wizard
In SQL Server 2000, the Import-Export Wizard had, on the very last screen
before the commit, a listing of the SOURCE SERVER NAME and the
DESTINATION SERVER NAME. For some unknown reason, this has been removed.
It is the best double check I have to make sure that I am NOT moving test
data to production but actually moving live data to a test server. NOT
having this check is truly annoying! It would be a vast improvement if this
wizard [behaved as it did in SQL Server 2000]!
Regards,
Jamie
Good suggestion. You can submit your product feedback directly to Microsoft
using via Connect (http://connect.microsoft.com/SQLServer). You and others
can vote as well.
Hope this helps.
Dan Guzman
SQL Server MVP
"thejamie" <thejamie@.discussions.microsoft.com> wrote in message
news:5025EC4F-9139-41FE-B715-15B73DED67D8@.microsoft.com...
> In SQL Server 2000 - using the IMP-EXP WIZ had => on the very last screen
> before the commit - there is a listing of the SOURCE SERVER NAME and the
> DESTINATION SERVER NAME. For some unknown reason, this has been removed.
> It is the best double check I have to make sure that I am NOT moving test
> data to production but actually moving live data to a test server. NOT
> having this check is truly annoying! It would be a vast improvement if
> this
> wizard [behaved as it did in SQL Server 2000]!
> --
> Regards,
> Jamie
|||Complete.
Regards,
Jamie
"Dan Guzman" wrote:
> Good suggestion. You can submit your product feedback directly to Microsoft
> using via Connect (http://connect.microsoft.com/SQLServer). You and others
> can vote as well.
>
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "thejamie" <thejamie@.discussions.microsoft.com> wrote in message
> news:5025EC4F-9139-41FE-B715-15B73DED67D8@.microsoft.com...
>
|||Don't forget to vote on your own submission!
Hope this helps.
Dan Guzman
SQL Server MVP
"thejamie" <thejamie@.discussions.microsoft.com> wrote in message
news:7E6A15C5-ADC8-4959-AC94-415821F89B2A@.microsoft.com...
> Complete.
> --
> Regards,
> Jamie
>
> "Dan Guzman" wrote:
Annoying issue with PDF rendering
Are there some guidelines on setting the page and body sizes as well as
laying out objects such that PDF doesn't throw blank pages between pages
or on the first page.
My page is standard A4 which in landscape means it is 29.7cm wide - and this
is what I have set the report width to. I have margins of 1cm on the right
and 1cm on the left, the top and bottom margins are both 1.5cm. Allowing
for this, the report body and footer (no header) are 27.5cm wide.
Body contains two tables. One table is stand alone the other Table is
imbedded in a List grouped on user Id - there are no pagebreak checkboxes
checked.
In the footer I have a few textboxes detailing various things such as print
date and time, page number etc. The objects in the Footer and Body are
indented slightly and nothing overlaps.
Why am I getting these blank pages spilled in to Acrobat?
thanks
Matt
Matt
I would be really interested to know the answer to this as I am
experiencing the exact same problem. Looking back at past responses,
it looks like this is something that cannot be solved until the next
version. To overcome the problem, I am having to export to excel, copy
to word and print to pdf. Please if anyone has a better solution, lets
us know...
Lisa
"Matt" <NoSpam:Matthew.Moran@.Computercorp.com.au> wrote in message news:<esfCduUaEHA.1644@.tk2msftngp13.phx.gbl>...
> Are there some guidlines on setting the page and body sizes as well as
> laying out objects such that PDF doesn't throw blank pages between or pages
> or on the first page.
> My page is standard A4 which in landscape means it is 29.7cm wide - and this
> is what I have set the report width to. I have margins of 1cm on the right
> and 1cm on the left, the top and bottom margins are both 1.5cm. Allowing
> for this, the report body and footer (no header) are 27.5cm wide.
> Body contains two tables. One table is stand alone the other Table is
> imbedded in a List grouped on user Id - there are no pagebreak checkboxes
> checked.
> In the footer I have a few textboxes detailing various things such as print
> date and time, page number etc. The objects in the Footer and Body are
> indented slightly and nothing overlaps.
> Why am I getting these blank pages spilled in to Acrobat?
> thanks
> Matt
|||I believe I ran into this on a report I had to export to pdf and if I recall
correctly, I was able to resolve it by playing around with the 'Size'
properties of the Body and the 'PageSize' properties of the Report.
You see the Body properties by clicking on an empty area of your report in
Report Designer, the white part with the grid lines or dots on it.
You see the Report properties by clicking on the thin beige area in Report
Designer just to the right or left of the Body of the report I just
described above.
I found that by playing around with these #'s I was able to control the pdf
rendering a bit better. I think one set of #'s had to be bigger than the
other to stop the blank page thing from happening. And please don't ask
which way it goes 'cause I don't remember :-) And I don't have that project
open right now and don't want to wait 10 minutes for it to open. Our
solution is getting rather large and seems to take forever to open.
Hope that helps.
Dale
"Matt" <NoSpam:Matthew.Moran@.Computercorp.com.au> wrote in message
news:esfCduUaEHA.1644@.tk2msftngp13.phx.gbl...
> Are there some guidlines on setting the page and body sizes as well as
> laying out objects such that PDF doesn't throw blank pages between or pages
> or on the first page.
> My page is standard A4 which in landscape means it is 29.7cm wide - and this
> is what I have set the report width to. I have margins of 1cm on the right
> and 1cm on the left, the top and bottom margins are both 1.5cm. Allowing
> for this, the report body and footer (no header) are 27.5cm wide.
> Body contains two tables. One table is stand alone the other Table is
> imbedded in a List grouped on user Id - there are no pagebreak checkboxes
> checked.
> In the footer I have a few textboxes detailing various things such as
> date and time, page number etc. The objects in the Footer and Body are
> indented slightly and nothing overlaps.
> Why am I getting these blank pages spilled in to Acrobat?
> thanks
> Matt
>|||Hi Lisa,
You may have found as Dale suggests, that by playing with the formatting,
you can get it to export correctly to PDF.
Probably like you Lisa I was after something a bit more concrete to take the
guesswork out of it, but thanks to Dale for responding.
For what it's worth, this is where I'm at.
1) The page width needs to be either 21cm or 29.7cm for portrait and
landscape respectively. I am really only using Landscape, so I'm using
measurements relative to that.
2) The parts of the report - Body, Header and Footer - cannot be wider than the
page width less margins. I typically have 1cm margins, so my Body, Header and
Footer need in theory to be no wider than 27.7cm - in practice, I have found that
I can't get over 27cm - although I've done quite a bit of playing, my testing is by
no means exhaustive.
3) Make sure objects don't overlap. In Crystal, you could pop things on top
of each other, in RS this is not the case.
The above ones are pretty obvious, so you are probably on top of them already.
4) This is the grey area. I have two tables. One is stand-alone and I can set it to
the width of the report body. A second table resides in a list. Even
though I'd reduced the width of this table to about 20cm, it was still
spilling over to a new page. I found that setting the Can Grow property to True for
some of the longer fields resolved this. Even so, I cannot get the table
any wider than 25.5cm. The table is sitting within a List, which is 26.5cm
wide and has no padding.
Perhaps we can refine this as we come across more ideas.
regards
Matt
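A rough way to sanity-check the arithmetic behind points 1) and 2) above. This is only a sketch, and the measurements are taken from the A4 landscape example in this thread - substitute your own report's numbers:

-- The Body plus both margins must not exceed the physical page width,
-- otherwise the PDF renderer spills the overflow onto extra (blank) pages.
DECLARE @PageWidth decimal(4,1), @LeftMargin decimal(4,1),
        @RightMargin decimal(4,1), @BodyWidth decimal(4,1)
SELECT @PageWidth = 29.7, @LeftMargin = 1.0, @RightMargin = 1.0, @BodyWidth = 27.5
IF @BodyWidth <= @PageWidth - @LeftMargin - @RightMargin
    PRINT 'Body fits: ' + CAST(@PageWidth - @LeftMargin - @RightMargin AS varchar(10)) + 'cm available'
ELSE
    PRINT 'Body too wide by ' + CAST(@BodyWidth - (@PageWidth - @LeftMargin - @RightMargin) AS varchar(10)) + 'cm - expect blank pages'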
"Lisa" <budgenlj@.willis.com> wrote in message
news:9abd1432.0407140649.3a7717f8@.posting.google.com...
> Matt
> I would be really interested to know the answer to this as I am
> experiencing the exact same problem. Looking back at past responses,
> it looks like this is something that cannot be solved until the next
> version. To overcome the problem, I am having to export to excel, copy
> to Word and print to PDF. Please, if anyone has a better solution, let
> us know...
> Lisa
>
> "Matt" <NoSpam:Matthew.Moran@.Computercorp.com.au> wrote in message
news:<esfCduUaEHA.1644@.tk2msftngp13.phx.gbl>...
> > Are there some guidelines on setting the page and body sizes, as well as
> > laying out objects, such that PDF doesn't throw blank pages between pages
> > or on the first page?
> >
> > My page is standard A4 which in landscape means it is 29.7cm wide - and this
> > is what I have set the report width to. I have margins of 1cm on the right
> > and 1cm on the left, the top and bottom margins are both 1.5cm. Allowing
> > for this, the report body and footer (no header) are 27.5cm wide.
> >
> > Body contains two tables. One table is stand-alone; the other is
> > embedded in a List grouped on user Id - there are no page-break checkboxes
> > checked.
> >
> > In the footer I have a few textboxes detailing various things such as print
> > date and time, page number etc. The objects in the Footer and Body are
> > indented slightly and nothing overlaps.
> >
> > Why am I getting these blank pages spilled in to Acrobat?
> >
> > thanks
> >
> > Matt|||Dale/Matt
Thank you for your responses. I have managed to get this to work now,
and it exports to PDF correctly after altering the page widths. Thank you both so much.
Lisa
"Dale" <dale@.NoSpamPlease.ca> wrote in message news:<ewSzN1daEHA.752@.TK2MSFTNGP09.phx.gbl>...
> I believe I ran into this on a report I had to export to pdf and if I recall
> correctly, I was able to resolve it by playing around with the 'Size'
> properties of the Body and the 'PageSize' properties of the Report.
> You see the Body properties by clicking on an empty area of your report in
> Report Designer, the white part with the grid lines or dots on it.
> You see the Report properties by clicking on the thin beige area in Report
> Designer just to the right or left of the Body of the report I just
> described above.
> I found that by playing around with these #'s I was able to control the pdf
> rendering a bit better. I think one set of #'s had to be bigger than the
> other to stop the blank page thing from happening. And please don't ask
> which way it goes 'cause I don't remember :-) And I don't have that project
> open right now and don't want to wait 10 minutes for it to open. Our
> solution is getting rather large and seems to take forever to open.
> Hope that helps.
> Dale
> "Matt" <NoSpam:Matthew.Moran@.Computercorp.com.au> wrote in message
> news:esfCduUaEHA.1644@.tk2msftngp13.phx.gbl...
> > Are there some guidelines on setting the page and body sizes, as well as
> > laying out objects, such that PDF doesn't throw blank pages between pages
> > or on the first page?
> >
> > My page is standard A4 which in landscape means it is 29.7cm wide - and this
> > is what I have set the report width to. I have margins of 1cm on the right
> > and 1cm on the left, the top and bottom margins are both 1.5cm. Allowing
> > for this, the report body and footer (no header) are 27.5cm wide.
> >
> > Body contains two tables. One table is stand-alone; the other is
> > embedded in a List grouped on user Id - there are no page-break checkboxes
> > checked.
> >
> > In the footer I have a few textboxes detailing various things such as print
> > date and time, page number etc. The objects in the Footer and Body are
> > indented slightly and nothing overlaps.
> >
> > Why am I getting these blank pages spilled in to Acrobat?
> >
> > thanks
> >
> > Matt
> >
> >
Annoying Inner Join Problem
Greetings SSIS friends,
I have the following problem in SSIS and it's driving me nuts!!!!!
My situation is as follows:
Data Source 1 and Data Source 2 are joined using a merge join. This bit works fine.
The output of the above join is then joined to a third data source, but this time I only get 63 rows coming through instead of 77097, even though the join key in the second merge join component is the same as in the first one!!!
I thought I was going mad, so I decided to see if the same problem would occur if I did this with T-SQL. I created a temporary table for each of my three data sources.
I did an inner join between tmpTable_Source1 and tmpTable_Source2, then stored the result in tempTable4.
The final inner join was performed between tempTable4 and tempTable3, and the result produced 77097 rows, not 63.
What the hell is SSIS playing at?! The merge keys I used in T-SQL are the same ones I used in my SSIS package!!!!
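For anyone following along, the manual check described above boils down to something like this. It is only a sketch - the actual join key is not named in the post, so KeyCol below is a placeholder:

-- Step 1: join the first two sources and keep the result.
SELECT A.*
INTO   tempTable4
FROM   tmpTable_Source1 AS A
       INNER JOIN tmpTable_Source2 AS B ON A.KeyCol = B.KeyCol

-- Step 2: join that result to the third source and count the rows.
-- This is where T-SQL returned 77097 rows while the SSIS merge joins returned 63.
SELECT COUNT(*)
FROM   tempTable4 AS T4
       INNER JOIN tempTable3 AS T3 ON T4.KeyCol = T3.KeyCol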
Are all the inputs of your merge joins actually sorted? The inputs need to be sorted on the join key; otherwise you will get unexpected results.|||
Hi Rafael,
Yes, my inputs are sorted, but anyway I decided to do away with one of my data sources and combine it into the SQL command of my second data source.
Now I have another problem!!!
In my UNION ALL component I get an error for one of my input fields. It says that the metadata for my input column does not match the metadata for the associated output column. The trouble is, this was working just fine before and I haven't touched it!!!!
What would cause this kind of problem?!!
Your help would be much appreciated.
|||
Sorry Rafael,
I've solved this little problem. It looks like the data type for one of my input columns was a Unicode string.
I will look into your idea with the other saved package that I've got. I'm still unsure about the ordering business but will try it and let you know.
Thanks for your help.
|||Union All transforms are not very friendly when refreshing metadata. I have found it faster to delete the faulty column mapping inside the transform and add it back. To delete it, select the whole row representing the column with the problem and choose Delete from the right-click menu. Then add it back using the drop-down lists. Alternatively, you could delete the whole Union All and put it back.
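Tying the two problems in this thread together, one pattern that helps is to make both the sort order and the data types explicit in the source component's SQL command. This is just a sketch - the table, column and key names are placeholders, not taken from the package:

-- An explicit CAST keeps the column's metadata (Unicode vs non-Unicode, length)
-- stable, so downstream components such as Union All don't complain after edits.
-- The ORDER BY supplies the sorted input a Merge Join expects; remember to mark
-- the source output as IsSorted and set SortKeyPosition on the key column.
SELECT CAST(SomeTextColumn AS nvarchar(50)) AS SomeTextColumn,
       KeyCol
FROM   dbo.SomeSourceTable
ORDER BY KeyCol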
Annoying feature - report service rewrites query automatically.
This feature is very annoying: report service 2k rewrites my query, and even the
format of the query is changed too. How can I disable this feature?
Thanks really a lot.
Use the generic query designer (two pane) instead of the graphical query
designer. The button to switch to it is one of the buttons to the right of
the ...
Bruce Loehle-Conger
MVP SQL Server Reporting Services
"qye2020" <qye2020@.yahoo.com> wrote in message
news:uz0EglpJGHA.2088@.TK2MSFTNGP11.phx.gbl...
> This feature is very annoying, report service 2k rewrites my query, even
> the format of query is changed too. How to disable this feature?
> Thanks really a lot.
>|||Wonderful, problem is gone with your kind answer.
Thanks a lot.
"Bruce L-C [MVP]" <bruce_lcNOSPAM@.hotmail.com> wrote in message
news:ewOcV9pJGHA.3176@.TK2MSFTNGP12.phx.gbl...
> Use the generic query designer (two pane) instead of the graphical query
> designer. The button to switch to it is one of the buttons to the right of
> the ...
>
> --
> Bruce Loehle-Conger
> MVP SQL Server Reporting Services
> "qye2020" <qye2020@.yahoo.com> wrote in message
> news:uz0EglpJGHA.2088@.TK2MSFTNGP11.phx.gbl...
>> This feature is very annoying, report service 2k rewrites my query, even
>> the format of query is changed too. How to disable this feature?
>> Thanks really a lot.
>
Annoying Error Message
Hello,
I recently migrated a database from SQL Server 2000 to 2005. On the new server, I keep getting this type of error message whenever some sort of join is involved, and they are not complex joins either.
Error - "A severe error occurred on the current command. The results, if any, should be discarded."
For example:
select A.Cookie1, A.Cookie2
into BookersLookers_Cookie
from BookersLookers_DataSet A
LEFT JOIN ActionSetup B on A.ActionID = B.ActionID
group by A.Cookie1, A.Cookie2
Is there something I could change in the code to prevent this error from happening?
Any insight is appreciated,
-Lawrence
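As an aside for later readers: this is not a confirmed fix for the engine bug discussed below, but the LEFT JOIN above contributes no selected columns and no filtering, so the same result table can be produced without it - worth trying while troubleshooting:

-- Equivalent result set: each distinct (Cookie1, Cookie2) pair from the left
-- table appears exactly once, with or without the LEFT JOIN and GROUP BY.
SELECT DISTINCT A.Cookie1, A.Cookie2
INTO   BookersLookers_Cookie
FROM   BookersLookers_DataSet AS A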
Moving to engine forum|||Hi SQLChamp,
What is the build of your SQL server?
What is the exact error message you are getting?
Any errors in the error log?
Jag
|||Hello Jag Sandhu,
What is the build of your SQL server? 9.00.3042 (X64)
What is the exact error message you are getting? "A severe error occurred on the current command. The results, if any, should be discarded."
Any errors in the error log? No error log is set up.
-Lawrence
|||You've encountered an old bug which should have been fixed if you're running SP2.
http://support.microsoft.com/default.aspx/kb/910416
The error log we're interested in is the SQL Server error log. You can get it by executing the following sproc.
exec sp_readerrorlog
Please post the result of "select @@version" while you're at it.
|||I posted select @@version earlier - 9.00.3042 (X64)
I will see what I can find in the logs from sp_readerrorlog.
If all else fails, I guess this is a big bug, and hopefully SP2 will fix it.
-Lawrence
|||Hi
From your build number it seems like you are running SP2 already.
Let us know what you get from the error logs.
Jag
|||Hello,
I received the following error message from sp_readerrorlog:
2007-04-30 12:51:18.620 Server Error: 17310, Severity: 20, State: 1.
2007-04-30 12:51:18.620 Server A user request from the session with SPID 57 generated a fatal exception. SQL Server is terminating this session. Contact Product Support Services with the dump produced in the log directory.
|||You should probably do what the message says.
Contact Product Support Services with the dump produced in the log directory.
Or possibly post the .mdmp file on Connect for analysis.
|||I was wondering if anyone had a fix for this type of error/bug? Any hotfix or workaround?
Our IT Support contacted MS, but MS Product Support is not very responsive in this case. The "severe error" still occurs on a simple join. Any additional help would be much appreciated.
Sincerely,
-Lawrence
|||SP1 works fine, so I would NOT recommend applying SP2.
-Lawrence
annoying cut and paste behavior in query analyzer
background: sql2k on nt5 box..
Is there a way to get rid of all that extra syntax, such as
'COLLATE SQL_Latin1_General_CP1_CI_AS ' and
'[ ]', when cutting and pasting a table script from EM to Query Analyzer?
CREATE TABLE [analyte_property] (
[ESPKEY] [int] NOT NULL ,
[method_1] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_2] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_3] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_4] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_5] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_6] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_7] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_8] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_9] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[method_10] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
It would be ideal to have the script look like this:
CREATE TABLE analyte_property (
ESPKEY int NOT NULL ,
method_1 char (35) NULL ,
method_2 char (35) NULL ,
method_3 char (35) NULL ,
method_4 char (35) NULL ,
method_5 char (35) NULL ,
method_6 char (35) NULL ,
method_7 char (35) NULL ,
method_8 char (35) NULL ,
method_9 char (35) NULL ,
method_10 char (35) NULL
) ON PRIMARY
GO
Thank you.
Steve,
In the 'Generate SQL script' dialog box, 'Formatting' tab, check the option
'Script SQL 7 compatible features'.
Dinesh
SQL Server MVP
--
SQL Server FAQ at
http://www.tkdinesh.com
"== Steve Pdx==" <lins@.nospam.portptld.com> wrote in message
news:Odzd0eSPEHA.252@.TK2MSFTNGP10.phx.gbl...
> background: sql2k on nt5 box..
>
> is there a way to get rid of all those syntax such as
> 'COLLATE SQL_Latin1_General_CP1_CI_AS '
> '[ ]' when cut and paste table script from EM to Query Analyzer?
>
> CREATE TABLE [analyte_property] (
> [ESPKEY] [int] NOT NULL ,
> [method_1] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_2] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_3] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_4] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_5] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_6] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_7] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_8] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_9] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [method_10] [char] (35) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
> ) ON [PRIMARY]
> GO
> it would be ideal to have script look like this
> CREATE TABLE analyte_property (
> ESPKEY int NOT NULL ,
> method_1 char (35) NULL ,
> method_2 char (35) NULL ,
> method_3 char (35) NULL ,
> method_4 char (35) NULL ,
> method_5 char (35) NULL ,
> method_6 char (35) NULL ,
> method_7 char (35) NULL ,
> method_8 char (35) NULL ,
> method_9 char (35) NULL ,
> method_10 char (35) NULL
> ) ON PRIMARY
> GO
>
> Thank you.
>
>
|||Thank you, but I already knew that.
That's how I got rid of 'COLLATE SQL_Latin1_General_CP1_CI_AS '
when I use EM's generate SQL script function.
I was referring to highlighting a table in EM, Ctrl+C, and then pasting (Ctrl+V) in QA.
Besides, is there a way to get rid of those square brackets?
"Dinesh T.K" <tkdinesh@.nospam.mail.tkdinesh.com> wrote in message
news:evyaAmSPEHA.2716@.tk2msftngp13.phx.gbl...
> Steve,
> In the 'Generate sql script' dialog box 'formatting' tab, check the option
> 'Script sql7 comptatible feature'.
> --
> Dinesh
> SQL Server MVP
> --
> --
> SQL Server FAQ at
> http://www.tkdinesh.com
> "== Steve Pdx==" <lins@.nospam.portptld.com> wrote in message
> news:Odzd0eSPEHA.252@.TK2MSFTNGP10.phx.gbl...
>
|||I've seen this kind of post so many times...
When you use a GUI interface to do programming you get garbage - square
brackets, verbose collation - whatever...
Using EM to design tables or create indexes is Access-like. EM is the
"lowest common denominator"...
Our first commandment in our shop is to script everything. We script BULK
INSERTs, INDEX creation, table creation - STORED PROC creation.
That means we create text files, with file extensions of .SQL and use those
in QA to make changes to the database. Not doing it this way would mean we
were a single-stop shop - not developing code for the hundreds of clients we
want to meet.
When we script a SPROC creation we even add the "GRANT..." permission code -
so that the proper DB role has access to the SPROC. If you use EM to output
a script of a SPROC it doesn't contain any GRANT info - that's weak.
Having a text file for every DB change or definition means we can use Visual
SourceSafe... Yadda, yadda, yadda...
Sorry for ranting...
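For what it's worth, the scripted-deployment style described above looks roughly like this; the procedure and role names are placeholders for illustration, and the table is the one from earlier in this thread:

-- One .SQL file per object, kept in source control and run from QA.
IF OBJECT_ID('dbo.usp_GetAnalyteMethods') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetAnalyteMethods
GO
CREATE PROCEDURE dbo.usp_GetAnalyteMethods
    @ESPKEY int
AS
    SELECT method_1, method_2
    FROM   analyte_property
    WHERE  ESPKEY = @ESPKEY
GO
-- The GRANT travels with the object, so EM's script output isn't needed.
GRANT EXECUTE ON dbo.usp_GetAnalyteMethods TO db_app_role
GO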
"== Steve Pdx==" <lins@.nospam.portptld.com> wrote in message
news:OzjrgJTPEHA.3096@.TK2MSFTNGP09.phx.gbl...
> Thank you, but i already knew that.
> that's how i got rid of 'COLLATE SQL_Latin1_General_CP1_CI_AS '
> when i use EM, genrate sql script function.
> i was referring highlight a table in EM, ctrl+c, and ctrl+p in QA.
> besides, is there a way to get rid of those square brackets?
>
> "Dinesh T.K" <tkdinesh@.nospam.mail.tkdinesh.com> wrote in message
> news:evyaAmSPEHA.2716@.tk2msftngp13.phx.gbl...
|||Steve,
None that I am aware of, except maybe a mass replace of 'COLLATE ...' with ''
:-)
Why? If any of the column names were like 'method 1', then without [] the
script would fail.
Dinesh
SQL Server MVP
--
SQL Server FAQ at
http://www.tkdinesh.com
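If hand-editing the pasted script gets tedious, another option is to generate a bare-bones column list yourself from INFORMATION_SCHEMA. This is only a sketch for the simple case shown earlier in the thread (no keys, defaults or identity columns), and it deliberately leaves out the brackets - which, as noted above, is only safe while the column names contain no spaces or reserved words:

-- Builds one "name type (length) NULL/NOT NULL ," line per column,
-- ready to paste into a CREATE TABLE statement in QA.
SELECT '    ' + c.COLUMN_NAME + ' ' + c.DATA_TYPE
       + CASE WHEN c.CHARACTER_MAXIMUM_LENGTH IS NOT NULL
              THEN ' (' + CAST(c.CHARACTER_MAXIMUM_LENGTH AS varchar(10)) + ')'
              ELSE '' END
       + CASE WHEN c.IS_NULLABLE = 'YES' THEN ' NULL ,' ELSE ' NOT NULL ,' END
FROM   INFORMATION_SCHEMA.COLUMNS AS c
WHERE  c.TABLE_NAME = 'analyte_property'
ORDER BY c.ORDINAL_POSITION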
"== Steve Pdx==" <lins@.nospam.portptld.com> wrote in message
news:OzjrgJTPEHA.3096@.TK2MSFTNGP09.phx.gbl...
> Thank you, but i already knew that.
> that's how i got rid of 'COLLATE SQL_Latin1_General_CP1_CI_AS '
> when i use EM, genrate sql script function.
> i was referring highlight a table in EM, ctrl+c, and ctrl+p in QA.
> besides, is there a way to get rid of those square brackets?
>
> "Dinesh T.K" <tkdinesh@.nospam.mail.tkdinesh.com> wrote in message
> news:evyaAmSPEHA.2716@.tk2msftngp13.phx.gbl...