
Tuesday, March 27, 2012

Can't get LINQ to recognize that a sproc has changed

I have a GridView that is populated by a stored procedure via LINQ. I changed the stored procedure to return an additional column, but I can't get the additional column to show up in the GridView. The GridView has autogeneratecolumns=true, so if I were using a SqlDataSource the new column would show up automagically. But now it is not showing up and I cannot reference it. I can't figure out how to make LINQ recognize that my stored procedure has changed.

I have tried deleting the sproc definition from the dbml file, making sure the partial class is removed from the .vb file, refreshing the stored procedures list in database explorer, and adding the sproc definition back again (a hassle), but it still insists on defining only the original columns and not the new column.

How do I get LINQ back in sync with the database?

Is there any way to get LINQ back in sync after database changes?


Can't Get Decimal to Show Up!

Hey...I've been cracking my head about this one all day, and I'm sure it's
an easy answer, but here goes:

I have a column entitled Sequ which is defined as a Decimal with
Precision 10 and Scale 5. In an ASP.NET page, I'm performing a
calculation which is inserted into the Sequ column. I'm doing a
response.write, which enables me to see that the numbers are in fact
being calculated correctly complete with a string of numbers after the
decimal, but when I pull the numbers out to use on the page or access
the table, Sequ lists all numbers as integers. I'm using a stored
procedure in which I'm declaring the @calc parameter (the calculated
value which is being inserted) as a Decimal -- I don't know what I'm
doing wrong. Help appreciated...thanks.

Erik

On 2 Jan 2005 12:27:13 -0800, erikthenomad@hotmail.com wrote:

(snip)
> I'm using a stored
>procedure in which I'm declaring the @calc parameter (the calculated
>value which is being inserted) as a Decimal

Hi Erik,

Like this:

DECLARE @calc decimal(10,5)

or like this:

DECLARE @calc decimal

In the latter case, you've declared the variable with the default precision (18)
and default scale (0).

In the former case, there must be something else wrong. It might help if
you post the complete code for the stored procedure.

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)

Hugo,

Thanks for the help -- I ended up trying "smallmoney" while awaiting a
response, which did the trick quite nicely (I don't need more than four
decimal places); are there any drawbacks to that? I think the issue
was that I had not specifically declared the precision of the decimal,
thinking it would be taken care of already since I had already declared
it in the table design.

Erik

On 2 Jan 2005 13:47:52 -0800, erikthenomad@hotmail.com wrote:

>Hugo,
>Thanks for the help -- I ended up trying "smallmoney" while awaiting a
>response, which did the trick quite nicely (I don't need more than four
>decimal places); are there any drawbacks to that?

Hi Erik,

The value will be implicitly converted from decimal(10,5) to smallmoney.
You'll lose the fifth decimal, as smallmoney and money have a precision of
four decimal places. The maximum for smallmoney is over 200,000, so you
won't have problems with that. The conversion will have a very small
effect on performance. If the parameter is used in a WHERE clause and
compared to a decimal column, the parameter will be converted back.
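(For illustration, a quick sketch of that rounding with a made-up value:)

DECLARE @calc decimal(10,5)
SET @calc = 123.45678

SELECT @calc                     AS original_decimal,  -- 123.45678
       CAST(@calc AS smallmoney) AS as_smallmoney      -- 123.4568, the fifth decimal is rounded away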

The main drawback is that you'll fail to understand the logic when you
have to get back to the code, several months from now.

> I think the issue
>was that I had not specifically declared the precision of the decimal,
>thinking it would be taken care of already since I had already declared
>it in the table design.

If you declare a variable, SQL Server can't know if you'll use it to store
values from a column, let alone know for which column. So SQL Server will
just use the defaults if you don't specify precision.
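(A small sketch of the difference, using the defaults mentioned above:)

DECLARE @calc1 decimal          -- default precision (18) and scale (0): no digits after the decimal point
DECLARE @calc2 decimal(10,5)

SET @calc1 = 3.14159
SET @calc2 = 3.14159

SELECT @calc1 AS default_decimal,   -- 3
       @calc2 AS decimal_10_5       -- 3.14159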

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)

Sunday, March 25, 2012

can't force index

Hi,
I have a table called "users"
basically, it has a column called "user_id",
and there is a clustered index called "user_id_index",
so every time I execute
select * from users
and look at the execution plan, it always uses user_id_index, which is
fine.
However, I added another index called "user_name_index"
and then executed:
select * from users (index=user_name_index)
Now when I look at the execution plan, it is still using "user_id_index";
obviously, it didn't force the index. What happened? Why does SQL Server
ignore my index hint? How do you solve this problem?

Recall that the clustered index holds the data as well as the key
columns. Unless the index "user_name_index" contains all the columns of
the table the server still has to read the clustered index to retrieve
the data. You should see a bookmark lookup on the cluster key.
Why do you see this as a problem? Why are you attempting to force an
index hint? Why are you using SELECT *, which potentially hinders index
optimization and shouldn't be used at all in production code?
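(For reference, a sketch of the hint syntax against a base table, using the column and index names from this thread; SQL Server 2000-era syntax:)

-- A nonclustered index on user_name alone covers a query that only asks
-- for user_name, so this can be satisfied from that index:
CREATE NONCLUSTERED INDEX user_name_index ON users (user_name)
GO

SELECT user_name
FROM users WITH (INDEX(user_name_index))
WHERE user_name = 'Joe'

-- SELECT * still has to visit the clustered index for the remaining
-- columns (the bookmark lookup described above), which is why the
-- optimizer may prefer the clustered index anyway.
SELECT *
FROM users WITH (INDEX(user_name_index))
WHERE user_name = 'Joe'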
David Portas
SQL Server MVP
--

Hi
Not supplying a WHERE clause results in SQL Server doing a table scan, so
indexes may not be used (why use an index when you are returning all the
data?).
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1108762629.179991.114690@.o13g2000cwo.googlegroups.com...
> Recall that the clustered index holds the data as well as the key
> columns. Unless the index "user_name_index" contains all the columns of
> the table the server still has to read the clustered index to retrieve
> the data. You should see a bookmark lookup on the cluster key.
> Why do you see this as a problem? Why are you attempting to force an
> index hint? Why are you using SELECT *, which potentially hinders index
> optimization and shouldn't be used at all in production code.
> --
> David Portas
> SQL Server MVP
> --
>

user_name_index is used on "user_name" column,
but even when I execute
select user_name from users (index=user_name_index)
where user_name='Joe'
I still see the execution plan is using clustered index "user_id_index".
don't you think it's weird?
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1108762629.179991.114690@.o13g2000cwo.googlegroups.com...
> Recall that the clustered index holds the data as well as the key
> columns. Unless the index "user_name_index" contains all the columns of
> the table the server still has to read the clustered index to retrieve
> the data. You should see a bookmark lookup on the cluster key.
> Why do you see this as a problem? Why are you attempting to force an
> index hint? Why are you using SELECT *, which potentially hinders index
> optimization and shouldn't be used at all in production code.
> --
> David Portas
> SQL Server MVP
> --
>

> don't you think it's weird?
No. Did you read the replies from Mike and myself?
If you explain what you want to achieve maybe we can help you better.
If you just want to understand indexes and hints then I recommend Kalen
Delaney's book "Inside SQL Server". Hints are an advanced feature and
should be used only when essential and with an understanding of their
effects on the query plan.
David Portas
SQL Server MVP
--

> select user_name from users (index=user_name_index)
> where user_name='Joe'
> I still see the execution plan is using clustered index
"user_id_index".
Maybe you could post some runnable code to reproduce that behaviour
(CREATE..., INSERT..., SELECT...). I don't see that myself. I get
user_name_index used with or without the hint. Also tell us your
edition, version and SP level.
David Portas
SQL Server MVP
--

1) How big is the table? How many pages (blocks of 8 kilobytes) does the
table use? You can check this with "sp_spaceused".
2) What version of SQL-Server are you using
3) Please post (simplified) DDL and some sample rows, and preferably a
script to reproduce the behavior.
Gert-Jan
Britney wrote:
> user_name_index is used on "user_name" column,
> but even when I execute
> select user_name from users (index=user_name_index)
> where user_name='Joe'
> I still see the execution plan is using clustered index "user_id_index".
> don't you think it's weird?

Oh my god, sorry guys.
I was wrong about it, "users" is not a table, but it's a view.
I just found out.
In case you ask me why I'm doing this stupid view:
the reason we create a view for this is because I want to do snapshot-style
isolation for reads and writes.
If data is coming in to the [2users] table, then I alter the view to use the
[1users] table. So the users view sits over 2 tables:
one for reading and one for writing. This way I don't worry about locking.
CREATE VIEW users
AS
select * from [2users]
--
Now we know what is happening...
If I select from the actual table,
select user_name from [2users] (index=user_name_index)
where user_name='Joe'
I see that it's using the forced index.
I guess the view doesn't work correctly for some reason.
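(A sketch of the swap pattern just described, using the names from this thread; note the hint is applied to the base table, which is where it took effect in the test above:)

-- Repoint readers at whichever copy is not currently being loaded
ALTER VIEW users
AS
SELECT * FROM [1users]
GO

-- Apply the index hint to the base table itself rather than to the view
SELECT user_name
FROM [2users] WITH (INDEX(user_name_index))
WHERE user_name = 'Joe'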
---
sp_spaceused [2users]
result:
name rows reserved data index_size unused
[2users] 41892 13952 KB 6616 KB 7144 KB 192 KB
---
select @@version
result:
Microsoft SQL Server 2000 - 8.00.780 (Intel X86) Mar 3 2003 10:28:28
Copyright (c) 1988-2003 Microsoft Corporation Enterprise Edition on Windows
NT 5.0 (Build 2195: Service Pack 4)
----

> 1) How big is the table? How many pages (blocks of 8 Kilobyte) does the
> table use? You can check this with "sp_spaceused".
>
"Gert-Jan Strik" <sorry@.toomuchspamalready.nl> wrote in message
news:42166EA6.EA6900C3@.toomuchspamalready.nl...
> 1) How big is the table? How many pages (blocks of 8 Kilobyte) does the
> table use? You can check this with "sp_spaceused".
> 2) What version of SQL-Server are you using
> 3) Please post (simplified) DDL and some sample rows, and preferably a
> script to reproduce the behavior.
> Gert-Jan
>
> Britney wrote:

Tuesday, March 20, 2012

Can't Export Fields w/ over 255 characters

I have a query where I am trying to export a column that has around
2000 characters and it is being truncated. What is not set up in MS SQL
correctly so that DTS will export the csv correctly?

Thanks,
John

<quinniii@yahoo.com> wrote in message
news:1109013962.276779.70380@.o13g2000cwo.googlegro ups.com...
>I have a query where I am trying to export a column that has around
> 2000 characters and it is truncated. What is not setup in MS SQL
> correctly so that DTS will export the csv correctly.
> Thanks,
> John

This might be the issue:

http://www.sqldts.com/?297

If that doesn't help, then I suggest you post more information - what
version of MSSQL, what data source and target types do you have, what sort of
task/transformation are you using to move the data etc.

Simon

Can't enter data

I keep getting the error message "the value you entered is not consistent
with the data type or length of the column". I verified that I am entering the
correct datatype and length. Any ideas why it is not allowing me to enter
data?
Thanks...

Hi,
Can you please post the table structure and the data you are trying to
insert.
Thanks
Hari
MCDBA
"mmc" <mmc@.discussions.microsoft.com> wrote in message
news:8D43843F-621B-4BCD-A855-917F592DB1D5@.microsoft.com...
> I keep getting the error message "the value you entered is not consistent
> with the data type or length of the column" I verified that I am entering
the
> correct datatype and length. Any ideas why it is not allowing me to enter
> data.
> Thanks...


Can't drop constraint

Hello,
I am using SS7.
I have a table with a column "yn_LockOut" with a default set to 0. I tried
to delete the column and got the following:
'tblGuidelineResponseOwner' table
- Error modifying column properties for 'yn_LockOut'.
ODBC error: [Microsoft][ODBC SQL Server Driver][SQL
Server]'DF_tblGuideLineResponse_yn_LockOut' is not a constraint.
[Microsoft][ODBC SQL Server Driver][SQL Server]Could not drop constraint.
See previous errors.
When I ran exec sp_helpconstraint 'tblGuidelineResponseOwner', I got:
constraint_type constraint_name
DEFAULT on column yn_LockOut DF_tblGuideLineResponse_yn_LockOut
I ran:
alter table tblGuidelineResponseOwner DROP CONSTRAINT
DF_tblGuideLineResponse_yn_LockOut
and got the error message:
'DF_tblGuideLineResponse_yn_LockOut' is not a constraint.
Server: Msg 3727, Level 16, State 1, Line 5
Could not drop constraint. See previous errors.
Any help with this would be appreciated.
--
Thanks in advance,
Steven

Steven,
Check sp_unbindefault in BOL.
-Mark
This posting is as is
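(A sketch of what that looks like, assuming the default turns out to be a bound default object created with sp_bindefault rather than a DEFAULT constraint, which would explain the "is not a constraint" error:)

-- Unbind the default object from the column, then the column can be dropped
EXEC sp_unbindefault 'tblGuidelineResponseOwner.yn_LockOut'

ALTER TABLE tblGuidelineResponseOwner DROP COLUMN yn_LockOut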
>--Original Message--
>Hello,
>I am using SS7.
>I have a table with a column "yn_LockOut" with a default
set to 0. I tried
>to delete the column and got the following:
>'tblGuidelineResponseOwner' table
>- Error modifying column properties for 'yn_LockOut'.
>ODBC error: [Microsoft][ODBC SQL Server Driver][SQL
>Server]'DF_tblGuideLineResponse_yn_LockOut' is not a
constraint.
>[Microsoft][ODBC SQL Server Driver][SQL Server]Could not
drop constraint.
>See previous errors.
>
>When I ran exec
sp_helpconstraint 'tblGuidelineResponseOwner', I got:
>constraint_type
constraint_name
>DEFAULT on column yn_LockOut
DF_tblGuideLineResponse_yn_LockOut
>
>I ran:
>alter table tblGuidelineResponseOwner DROP CONSTRAINT
>DF_tblGuideLineResponse_yn_LockOut
>and got the error message:
>'DF_tblGuideLineResponse_yn_LockOut' is not a constraint.
>Server: Msg 3727, Level 16, State 1, Line 5
>Could not drop constraint. See previous errors.
>Any help with this would be appreciated.
>--
>Thanks in advance,
>Steven
>
>.
>

Monday, March 19, 2012

Can't Drop Column or Shrink tran log

I am having a problem getting the transaction log of a db
that is being replicated to shrink even after performing
a backup. There is about 500 MB of data in the database
and the trans log is almost 4GB. Dbcc checkdb returned no
error but dbcc opentran returned the following message:
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (305:22434:1)
DBCC execution completed. If DBCC printed error messages,
contact your system administrator.
Also on this particular db, I can't add/drop column using
EM on the published db but can add column using
sp_repladdcolumn. When I tried to drop a column using
sp_repldropcolumn, I get the following message: "ALTER
TABLE DROP COLUMN failed because 'FieldName' is currently
replicated."
The column that I can't drop is not a pk or part of a pk
and there are no constraints. There is a relationship
between this table and another table on another column.
Thanks
Emma
Emma,
I remember this thread from a while back!
It seems you have some transactions (1 or more) in the log which have not
been read by a log reader. Run your log reader agent for all publications in
this database, then turn them off and make sure no-one else can modify the
data. Run sp_replcmds to see if this returns anything else. If it does then
consider running sp_repldone. This will mark the log as having been read
completely, and you'll then be able to back it up and shrink it.
First, have a look in BOL for the 2 sps above to familiarize yourself with
them. Also, make sure you have a backup before attempting this fix.
HTH,
Paul Ibison
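(A sketch of that sequence, to be run in the published database with the log readers stopped; the database name MyPublishedDb and the file names are placeholders:)

EXEC sp_replcmds        -- anything still waiting to be read from the log?

-- Mark all pending replicated transactions as distributed
EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
     @numtrans = 0, @time = 0, @reset = 1

BACKUP LOG MyPublishedDb TO DISK = 'D:\Backups\MyPublishedDb_log.bak'
DBCC SHRINKFILE (MyPublishedDb_log)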

Paul,
According to BOL, sp_replcmds and sp_repldone are for
transactional replication. I am running a merge
replication. Does this make any difference?
Thanks
Emma

>--Original Message--
>Emma,
>I remember this thread from a while back!
>It seems you have some transactions (1 or more) in the
log which have not
>been read by a log reader. Run your log reader agent for
all publications in
>this database, then turn them off and make sure no-one
else can modify the
>data. Run sp_replcmds to see if this returns anything
else. If it does then
>consider running sp_repldone. This will mark the log as
having been read
>completely, and you'll then be able to back it up and
shrink it.
>First, have a look in BOL for the 2 sps above to
familiarize yourself with
>them. Also, make sure you have a backup before
attempting this fix.
>HTH,
>Paul Ibison
>
>.
>

Emma,
Yes! I assumed it was transactional, as the output of dbcc opentran looks
like that of transactional. Have you perhaps ever set up transactional
previously then disabled it? If so, please try running sp_repldone then see
if dbcc opentran reports anything different - hopefully no transactions will
be reported. Only do this if you don't have any current transactional
publications based on this database. As a last resort (backup your databases
before trying this), you could detach the database, delete the transaction
log, then reattach the database and a new empty log should be created for
you (I have had cause to do this previously and it worked).
HTH,
Paul Ibison

Paul,
Thanks for all your help. I have tried everything and
nothing is working. sp_repldone tells me that the
database is not published. I could not detach the
database because it is being replicated. I even stopped
the SQL services and deleted the log file, but this
generated an error when I restarted the database so I had
to restore the file. I disabled publication on the
database and tried to delete a column, and it tells me
that I can't delete because the table is being
replicated. Is my database corrupt? How can I get around
these problems? The two things I am trying to do are to drop
a column from a published table and to shrink my log file.
Any suggestions as to what I can do?
Thanks
Emma

>--Original Message--
>Emma,
>Yes! I assumed it was transactional, as the output of
dbcc opentran looks
>like that of transactional. Have you perhaps ever set up
transactional
>previously then disabled it? If so, please try running
sp_repldone then see
>if dbcc opentran reports anything different - hopefully
no transactions will
>be reported. Only do this if you don't have any current
transactional
>publications based on this database. As a last resort
(backup your databases
>before trying this), you could detach the database,
delete the transaction
>log, then reattach the database and a new empty log
should be created for
>you (I have had cause to do this previously and it
worked).
>HTH,
>Paul Ibison
>
>.
>

Emma,
try running
exec sp_dboption - to see the current settings
exec sp_dboption 'pubs','published',false
exec sp_dboption 'pubs','merge publish',false
This should allow you to detach the database and remove the log.
For the table, if it is still a problem, there is a stored procedure to do
this called sp_MSunmarkreplinfo which takes a tablename as a parameter.
Alternatively, setting replinfo to 0 in sysobjects for the particular table
should do it. Finally, running sp_removedbreplication can be used to remove
all traces of replication in the database, but obviously must only be done
if this database is not also configured as a publisher.
Regards,
Paul Ibison
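(Putting those suggestions together as a sketch; 'pubs' above was just an example, and MyDb and MyTable here are placeholders. Take a backup first, and only use sp_removedbreplication if nothing in the database should stay published:)

EXEC sp_dboption 'MyDb'                           -- current settings
EXEC sp_dboption 'MyDb', 'published', 'false'
EXEC sp_dboption 'MyDb', 'merge publish', 'false'

-- Clear the replication marking on a single table...
EXEC sp_MSunmarkreplinfo 'MyTable'

-- ...or strip every trace of replication from the database
EXEC sp_removedbreplication 'MyDb'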

Paul,
Thanks again for your help. I will try what you suggested
and hope for the best. The db is also configured as a
publisher and here is the result of sp_dboption.
ANSI null default
ANSI nulls
ANSI padding
ANSI warnings
arithabort
auto create statistics
auto update statistics
autoclose
autoshrink
concat null yields null
cursor close on commit
db chaining
dbo use only
default to local cursor
merge publish
numeric roundabort
offline
published
quoted identifier
read only
recursive triggers
select into/bulkcopy
single user
subscribed
torn page detection
trunc. log on chkpt.

>--Original Message--
>Emma,
>try running
>exec sp_dboption - to see the current settings
>exec sp_dboption 'pubs','published',false
>exec sp_dboption 'pubs','merge publish',false
>This should allow you to detach the database and remove
the log.
>For the table, if it is still a problem, there is a
stored procedure to do
>this called sp_MSunmarkreplinfo which takes a tablename
as a parameter.
>Alternatively, setting replinfo to 0 in sysobjects for
the particular table
>should do it. Finally, running sp_removedbreplication
can be used to remove
>all traces of replication in the database, but obviously
must only be done
>if this database is not also configured as a publisher.
>Regards,
>Paul Ibison
>
>.
>

Can't drop column

I have a couple of questions relating to a merge
replication database.
1. I was unable to add a column to a table being
replicated using EM, but succeeded using
sp_repladdcolumn. When I tried to drop another column
using sp_repldropcolumn, I get the following
message: "ALTER TABLE DROP COLUMN failed
because 'FieldName' is currently replicated.
2. If I am unable to use EM to drop/add columns,
does this mean that my db is corrupt? How can I check if
my db is corrupt and which tools can I use?
3. The transaction log for a replicated db keeps
growing. The database is about 500 MB and the transaction
log is almost 4GB. I performed a complete backup and even
tried to shrink the log manually but the size did not
change.
I will appreciate any help I can get in resolving these
problems.
Thanks
Emma

Is this column a pk, or part of a pk? Are there any constraints on this
column?
It is unlikely your database is corrupt. Database corruption errors
normally show up when you query a page and get told that a page of the table or
index is inaccessible, or that your database is inaccessible. To check this, run dbcc
checkdb.
Regarding your ever-expanding database, run dbcc opentran and see if there
are any old open transactions. If so, figure out what they are doing and
evaluate killing them. Then consider switching to the simple recovery model
and trying to shrink the database again, several times. This will cause
locking, so it is best to do this off hours. After you do this run a backup,
and then switch back to the full model.
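(A sketch of that route; the database name and logical log file name below are placeholders:)

DBCC OPENTRAN ('MyDb')                    -- any old open transactions?

ALTER DATABASE MyDb SET RECOVERY SIMPLE
DBCC SHRINKFILE (MyDb_log)                -- may need to be run more than once
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak'
ALTER DATABASE MyDb SET RECOVERY FULL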
"Emma" <eeemore@.hotmail.com> wrote in message
news:175b001c418c1$10cd47e0$a501280a@.phx.gbl...
> I have a couple of questions relating to a merge
> replication database.
> 1. I was unable to add a column to a table being
> replicated using EM, but succeeded using
> sp_repladdcolumn. When I tried to drop another column
> using sp_repldropcolumn, I get the following
> message: "ALTER TABLE DROP COLUMN failed
> because 'FieldName' is currently replicated.
> 2. If I am unable to use EM to drop/add columns,
> does this mean that my db is corrupt? How can I check if
> my db is corrupt and which tools can I use?
> 3. The transaction log for a replicated db keeps
> growing. The database is about 500 MB and the transaction
> log is almost 4GB. I performed a complete backup and even
> tried to shrink the log manually but the size did not
> change.
> I will appreciate any help I can get in resolving these
> problems.
> Thanks
> Emma
>

Hilary,
Thanks for your response. dbcc checkdb returned no error.
dbcc opentran returned the following and I don't know
what to do with it.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (305:22434:1)
Thanks
Emma

>--Original Message--
>is this column a pk, or part of a pk? are there and
contraints on this
>column?
>It is unlikely your database is corrupt. Database base
corrpuption errors
>normally show up when you query a page telling you a
page of the table or
>index is inaccessible, your database is inaccessible. To
check this run dbcc
>checkdb
>Regarding your ever expanding database, run dbcc open
tran and see if there
>are any old open transactions. If so figure out what
they are doing and
>evaluate killing them. The consider switching to the
simple recovery model
>and trying to shrink the database again several times.
This will cause
>locking so it is best to do this off hours. After you do
this run a backup,
>and then switch back to the full model.
>"Emma" <eeemore@.hotmail.com> wrote in message
>news:175b001c418c1$10cd47e0$a501280a@.phx.gbl...
>
>.
>

Hilary,
In response to your first question, the column that I
can't drop is not a pk or part of a pk and there are no
constraints. There is a relationship between this table
and another table on another column.
Thanks
Emma

>--Original Message--
>is this column a pk, or part of a pk? are there and
contraints on this
>column?
>It is unlikely your database is corrupt. Database base
corrpuption errors
>normally show up when you query a page telling you a
page of the table or
>index is inaccessible, your database is inaccessible. To
check this run dbcc
>checkdb
>Regarding your ever expanding database, run dbcc open
tran and see if there
>are any old open transactions. If so figure out what
they are doing and
>evaluate killing them. The consider switching to the
simple recovery model
>and trying to shrink the database again several times.
This will cause
>locking so it is best to do this off hours. After you do
this run a backup,
>and then switch back to the full model.
>"Emma" <eeemore@.hotmail.com> wrote in message
>news:175b001c418c1$10cd47e0$a501280a@.phx.gbl...
>
>.
>

Sunday, March 11, 2012

Can't delete records from DB .. says: Too many rows were affected by update.

Hi,

I've added multiple records with the same info during practice. Now I'm trying to delete those records from the SQL Server DB, but it says

"Key column information is insufficient or incorrect. Too many rows were affected by update."

What to do, to delete these records?

Appreciated..

If you post the code we can help you out, but the error message you're getting means you're trying to update a row that has a duplicate. So, if your table had a primary key, you couldn't have a duplicate row and wouldn't have a problem.

hth

Add a new column to your table and give each record a unique value for the column. The quickest way to do this is to add an Identity column to the table (SQL Server will add the unique values). Delete the records you don't want and then you can remove the Identity column if you want.
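(A sketch of that approach against a hypothetical table MyTable whose rows are duplicated across both of its columns, col1 and col2:)

-- Give every row a unique handle
ALTER TABLE MyTable ADD row_id int IDENTITY(1,1)
GO

-- Keep the lowest row_id of each duplicate group, delete the rest
DELETE t
FROM MyTable t
WHERE t.row_id > (SELECT MIN(t2.row_id)
                  FROM MyTable t2
                  WHERE t2.col1 = t.col1
                    AND t2.col2 = t.col2)
GO

-- Remove the helper column again
ALTER TABLE MyTable DROP COLUMN row_id
GO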

Thursday, March 8, 2012

Can't Create Recursive Function

Greetings.

I'm having trouble creating a recursive function in T-SQL (SQL Server 2000).

I've got a table that has an ID column and a ParentID column. Each row can have a value in the ParentID column that references the ID column of another record - I'll call such rows "child records". I'll cal the row referenced by the ParentID the "parent record".

Each child record can itself have another child record.

I need a function that will take an ID column value as a parameter, and walk up the chain of parent records until I get the first record in the series and return that record's ID value. I'll call that record the "UrParent record".

I'm trying to create a recursive function called ufunc_ST_GetUrParentCertNum. In the function, there is of course a recursive call to itself - GetUrParentCertNum. However, when I try to run the CREATE FUNCTION script, I get the error:

Server: Msg 195, Level 15, State 10, Procedure ufunc_ST_GetUrParentCertNum, Line 26
'ufunc_ST_GetUrParentCertNum' is not a recognized function name.

I tried the same thing with a Stored Procedure, and that worked fine. However, I really want this to work as a function.

Does anyone have advice on how I can achieve this?

Thanks in advance.

- will f

Did you forget to put the schema name before the function name?

Try calling your function like dbo.ufunc_ST_GetUrParentCertNum.
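(For completeness, a minimal sketch of such a function with the recursive call schema-qualified; the table name tblCert and the int typing are assumptions, and SQL Server caps UDF recursion at 32 levels:)

CREATE FUNCTION dbo.ufunc_ST_GetUrParentCertNum (@ID int)
RETURNS int
AS
BEGIN
    DECLARE @ParentID int

    -- tblCert is a hypothetical name for the table with ID/ParentID columns
    SELECT @ParentID = ParentID
    FROM dbo.tblCert
    WHERE ID = @ID

    -- No parent: this row is the UrParent record
    IF @ParentID IS NULL
        RETURN @ID

    -- Schema-qualified recursive call
    RETURN dbo.ufunc_ST_GetUrParentCertNum(@ParentID)
END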

That did the trick exactly. Thank you very much.

- will f

Saturday, February 25, 2012

Can't create a secondary index on a bit column

I am trying to create a secondary index on mult. columns
and one of them is a bit (I know you should not but that
is what they want). Using Enterprise Manager this does not
work, I get the following error:
Primary key or index cannot be created on column '<0s>'
because its data type is 'bit'.
If I use Query Analyzer it works fine.
My question is: Is this a bug in Enterprise Manager that
it does not let me or a bug in Query Analyzer that it does
let me?
Thanks.

Don't know what is wrong. But you can check with QA's execution plan if
you have an adequate data set -- use a situation where the new index is suitable
and run a query to see whether the index is used. Or you can also look at sysindexes
and sysindexkeys.
Quentin
"Amy" <anonymous@.discussions.microsoft.com> wrote in message
news:030c01c3daf1$218abda0$a601280a@.phx.gbl...

> I am trying to create a secondary index on mult. columns
> and one of them is a bit (I know you should not but that
> is what they want). Using Enterprise Manager this does not
> work, I get the following error:
> Primary key or index cannot be created on column '<0s>'
> because its data type is 'bit'.
> If I use Query Analyzer it works fine.
> My question is: Is this a bug in Enterprise Manager that
> it does not let me or a bug in Query Analyzer that it does
> let me?
> Thanks.

Amy

There are some (to use your word) 'bugs' in EM.
It does work with QA, so be happy.
Note: You are right that it's a bad idea to have an index on a 'bit' column.
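(For reference, this is the sort of statement that goes through fine in Query Analyzer; the table and column names here are made up:)

CREATE NONCLUSTERED INDEX ix_MyTable_flag
ON MyTable (customer_id, is_active)   -- is_active is the bit column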
"Amy" <anonymous@.discussions.microsoft.com> wrote in message
news:030c01c3daf1$218abda0$a601280a@.phx.gbl...

> I am trying to create a secondary index on mult. columns
> and one of them is a bit (I know you should not but that
> is what they want). Using Enterprise Manager this does not
> work, I get the following error:
> Primary key or index cannot be created on column '<0s>'
> because its data type is 'bit'.
> If I use Query Analyzer it works fine.
> My question is: Is this a bug in Enterprise Manager that
> it does not let me or a bug in Query Analyzer that it does
> let me?
> Thanks.


can't convert char to varchar

I upgraded my database from 6.5 to 2000. Now I have a primary key in one table with datatype CHAR(5) and a second table with a column having datatype VARCHAR(5).

I am trying to create a foreign key from the second to the first table, but I am getting the following error:

"Column 'dbo.dma.dma' is not the same data type as referencing column 'fone.dma' in foreign key 'FK_fone_dma'."

But I believe CHAR to VARCHAR conversion is implicit.

Could anyone please shed some light on this problem?

Thanks in advance..

The conversion is implicit, but the server won't do conversions for foreign keys. The two must be the same datatype.

You can choose to maintain relational integrity through the use of triggers, but you are better off making the field types uniform.
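(A sketch of the uniform-type route, using the table, column, and constraint names from the error message above; adjust the NULL/NOT NULL part to match the existing column definition:)

-- Make the referencing column the same type as the referenced key
ALTER TABLE fone ALTER COLUMN dma char(5) NOT NULL

ALTER TABLE fone
    ADD CONSTRAINT FK_fone_dma
    FOREIGN KEY (dma) REFERENCES dbo.dma (dma)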

blindman

If the table was created with ANSI_PADDING OFF, then you can drop the foreign key and alter table alter column <col_name> char(5) NULL + any defaults or check constraints. With ANSI_PADDING OFF the way data is stored in a CHAR column is the same as for VARCHAR, provided that the column allows NULLs. After you have altered the column, re-create your foreign key and you're done.

Cool. :cool:

blindman

Doesn't appear so in RI

USE Northwind
GO

CREATE TABLE myTable99 (Col1 char(5) NOT NULL PRIMARY KEY, Col2 int DEFAULT 0)
GO

CREATE TABLE myTablexx (Col1 char(5) NOT NULL PRIMARY KEY, Col2 int DEFAULT 0)
ALTER TABLE myTablexx ADD CONSTRAINT myTablexx_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
GO

CREATE TABLE myTable00 (Col1 varchar(5) NOT NULL PRIMARY KEY, Col2 int DEFAULT 0)
ALTER TABLE myTable00 ADD CONSTRAINT myTable00_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
GO

DROP TABLE myTablexx
DROP TABLE myTable00
DROP TABLE myTable99
GO

Uncool. :confused:
blindman

Originally posted by rdjabarov
providing that the column allows NULLs.

Nulls...pk...hmmmmmmmmm

USE Northwind
GO

CREATE TABLE myTable99 (Col1 char(5) NOT NULL PRIMARY KEY, Col2 int DEFAULT 0)
GO

CREATE TABLE myTablexx (Col1 char(5) NOT NULL PRIMARY KEY, Col2 int DEFAULT 0)
ALTER TABLE myTablexx ADD CONSTRAINT myTablexx_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
GO

CREATE TABLE myTable00 (Col1 varchar(5) NOT NULL CONSTRAINT myTable00_PK PRIMARY KEY, Col2 int DEFAULT 0)
-- Will Fail
ALTER TABLE myTable00 ADD CONSTRAINT myTable00_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
ALTER TABLE myTable00 ADD Col3 char(5) NULL
UPDATE myTable00 SET Col3 = Col1
ALTER TABLE myTable00 DROP CONSTRAINT myTable00_PK
ALTER TABLE myTable00 DROP COLUMN Col1
ALTER TABLE myTable00 ADD Col1 char(5) NULL
UPDATE myTable00 SET Col1 = Col3
ALTER TABLE myTable00 ALTER COLUMN Col1 char(5) NOT NULL
ALTER TABLE myTable00 ADD CONSTRAINT myTable00_PK PRIMARY KEY (Col1)
-- Will NOT Fail
ALTER TABLE myTable00 ADD CONSTRAINT myTable00_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
GO

DROP TABLE myTablexx
DROP TABLE myTable00
DROP TABLE myTable99
GO

Originally posted by rdjabarov
If the table was created with ANSI_PADDING OFF, then you can drop the foreign key, alter table alter column <col_name> char(5) NULL + any defaults or check constraints. With ANSI_PADDING OFF the way data is stored in CHAR datatype is the same as for VARCHAR, providing that the column allows NULLs. After you altered the column re-create your foreign key and you're done.

Thanks.

But the same structure (PK CHAR / FK VARCHAR) is working fine in existing DB of 6.5.
How?

Good old M$...

They forgot to forward engineer that "feature"...

Hey I still think that this is wrong...had lots of disagreements about it though...

USE Northwind
GO

DECLARE @x int
CREATE TABLE myTable99(Col1 datetime)
SELECT @x = 0
INSERT INTO myTable99(Col1) SELECT @x
SELECT * FROM myTable99
GO

DROP TABLE myTable99
GO

In DB2, Oracle, etc., that breaks the rules...SQL Server loosely decides that 0 is a valid value for a datetime...blew my mind when I found this out...

Oh, and btw, if it's a PK, you won't be able to do the ANSI thing (I don't think)

If you can, post a sample...

So I assume it worked the second time, Brett?

Cool? Uncool?

If you mean the second alter, yes it works...cut and paste it into QA and watch the test...

The ANSI_PADDING thing I'll have to play with...

But since a PK can't be NULL the point is probably moot...

You know what a faster way would be (but probably more resource intensive..)

CREATE TABLE myTable007 (Col1 char(5) NOT NULL CONSTRAINT myTable007_PK PRIMARY KEY, Col2 int DEFAULT 0)
INSERT INTO myTable007 (Col1, Col2) SELECT Col1, Col2 FROM myTable00
DROP TABLE myTable00
SELECT Col1, Col2 INTO myTable00 FROM myTable007
DROP TABLE myTable007
ALTER TABLE myTable00 ADD CONSTRAINT myTable00_FK1 FOREIGN KEY (Col1) REFERENCES myTable99 (Col1)
GO

PK cannot allow NULLs, so Brett's experiment is not exactly demonstrating what I was talking about.

set ansi_padding off
go
create table parent (f1 char(5) not null primary key, f2 int null)
go
create table child (f1 char(5) null, f2 int null)
go
alter table child add constraint fk_child2parent foreign key (f1) references parent(f1)
go

...And if you insert 'A' into the PARENT and CHILD tables and select DATALENGTH(f1) from both, you'd see that the result on PARENT is 5 while on CHILD is 1. Thus, the behavior of the VARCHAR datatype in the CHILD table.

Originally posted by nmajeed
I upgraded my datbase from 6.5 to 2000. Now I have primary key in one table datatype CHAR(5) and second table with a column having datatype VARCHAR(5).

But that was the original question...

PK cannot allow NULLs, so Brett's experiment is not exactly demonstrating what I was talking about.

But that's neat...

EDIT: But I rarely play with settings, because it can be dangerous...you have to remember that you have a particular setting, and code for it...

If I do change a setting, it's always in the context of a transaction, and is set back at the conclusion...

So what are your settings in QA when you're about to create a table, for example? Or are you creating tables in EM?