Invoke a Web Service Method From A SQL Stored Procedure

No, this wasn’t my idea. The company I am currently contracting for has quite a bit of its infrastructure wrapped up in web services, and I needed to invoke a method of one of these services from a stored procedure. So this is the story of how to do that….

First we need to modify the web.config file for your web service. This code should go under the <system.web> element.

<webServices>
  <protocols>
    <add name="HttpGet"/>
    <add name="HttpPost"/>
  </protocols>
</webServices>

Next we need to make sure your SQL Server will allow this kind of call. You will need elevated permissions to run this script, and before you allow your SQL Server to reach functionality outside of the database engine, consider the security implications. Since I had no choice in this situation, paste the following into a SQL Server Management Studio query window:

exec sp_configure 'show advanced options', 1
go
reconfigure
go
exec sp_configure 'Ole Automation Procedures', 1 -- Enable
-- exec sp_configure 'Ole Automation Procedures', 0 -- Disable
go
reconfigure
go
exec sp_configure 'show advanced options', 0
go
reconfigure
go
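Once the script has run, you can confirm the change took effect by querying sys.configurations (a quick sanity check; the option name is exactly as used above):

```sql
-- Confirm OLE Automation Procedures is enabled (value_in_use should be 1)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'Ole Automation Procedures';
```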

Next we will write the stored procedure itself. Note that it is important to end the web service address with a question mark (?).

USE [DATABASENAMEHERE]
GO

/****** Object:  StoredProcedure [dbo].[Web_Service_Invoke]     ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:        <Kelly Martens>
-- Create date: <4/23/2018>
-- Description:    <Stored Procedure to invoke web service method>
-- =============================================
CREATE PROCEDURE [dbo].[Web_Service_Invoke]
     -- Add the parameters for the stored procedure here
    
AS
DECLARE @obj int
DECLARE @sURL varchar(200)
DECLARE @response varchar(8000)
SET @sURL = 'http://webserviceaddress.asmx/MethodName?'
EXEC sp_OACreate 'MSXML2.ServerXMLHTTP',@obj OUT
EXEC sp_OAMethod @obj,'Open',NULL,'GET',@sURL,'false'
EXEC sp_OAMethod @obj,'send'
EXEC sp_OAGetProperty @obj,'responsetext',@response OUT
SELECT @response [response]
EXEC sp_OADestroy @obj
RETURN

If you have parameters you need to pass to the web method, append them to the URL as a query string:

SET @sURL = 'http://webserviceaddress.asmx/MethodName?Param1=' + @value
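Putting that together, a parameterized version of the procedure might look like this sketch (Param1 and @value are placeholder names, not part of any real service):

```sql
-- Sketch: pass a parameter to the web method via the query string.
-- Param1 and @value are hypothetical placeholders.
CREATE PROCEDURE [dbo].[Web_Service_Invoke_Param]
    @value varchar(50)
AS
DECLARE @obj int
DECLARE @sURL varchar(200)
DECLARE @response varchar(8000)
SET @sURL = 'http://webserviceaddress.asmx/MethodName?Param1=' + @value
EXEC sp_OACreate 'MSXML2.ServerXMLHTTP',@obj OUT
EXEC sp_OAMethod @obj,'Open',NULL,'GET',@sURL,'false'
EXEC sp_OAMethod @obj,'send'
EXEC sp_OAGetProperty @obj,'responsetext',@response OUT
SELECT @response [response]
EXEC sp_OADestroy @obj
RETURN
```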

When you call this stored procedure, the output of the method called (the XML of the response page) will be returned to you, looking something like this:

<?xml version="1.0" encoding="utf-8"?>  <string xmlns="http://tempuri.org/">COMPLETE</string>
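Calling the procedure is then just a plain EXEC; the response comes back as a single-row result set:

```sql
-- Invoke the wrapper procedure; the [response] column holds the raw XML
EXEC [dbo].[Web_Service_Invoke];
```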

Should your permissions not be configured properly, or should there be an error in the web service itself, the response will let you know that as well.

Finally, let me emphasize again: this is NOT the optimal way to do this. You should be using a WCF service or SQL CLR, not monkeying about with your permissions. But this approach is available if you need it.


SQL Object Definition Keyword Search Stored Procedure

There are many times when you encounter a SQL database that has had a lot of fingers in it but not a lot of guidance to follow, and you need to search for a given concept or keyword to find which objects you should be looking at in a given situation. The stored procedure below will allow you to do that. It searches both parent and child object definitions for words that are like or equal to your search term, and returns them in order of rank, then in order of object creation using their object ids.


USE [YOURDB]
GO

/****** Object: StoredProcedure [dbo].[Object_Dependency_Keyword_SP] Script Date: 1/3/2018 3:26:16 PM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- =============================================

-- =============================================
CREATE PROCEDURE [dbo].[Object_Dependency_Keyword_SP]
@DefSearchTerm AS varchar(50)
AS
BEGIN
SET NOCOUNT ON;
IF 1=0 BEGIN
SET FMTONLY OFF
END

-- step 1 create temp table
--DROP TABLE #tempdep
CREATE TABLE #tempdep (objid int NOT NULL, objtype smallint NOT NULL)

-- step 2 load temp table
INSERT INTO #tempdep
SELECT
tbl.object_id AS [ID],
3
FROM
sys.tables AS tbl
--WHERE
--(tbl.name=N'Employee' and SCHEMA_NAME(tbl.schema_id)=N'HumanResources')

-- step 3 find dependencies
declare @find_referencing_objects int
set @find_referencing_objects = 1
-- parameters:
-- 1. create table #tempdep (objid int NOT NULL, objtype smallint NOT NULL)
-- contains source objects
-- 2. @find_referencing_objects defines ordering
-- 1 order for drop
-- 0 order for script

declare @must_set_nocount_off bit
set @must_set_nocount_off = 0

IF @@OPTIONS & 512 = 0
set @must_set_nocount_off = 1
set nocount on

declare @u int
declare @udf int
declare @v int
declare @sp int
declare @def int
declare @rule int
declare @tr int
declare @uda int
declare @uddt int
declare @xml int
declare @udt int
declare @assm int
declare @part_sch int
declare @part_func int
declare @synonym int

set @u = 3
set @udf = 0
set @v = 2
set @sp = 4
set @def = 6
set @rule = 7
set @tr = 8
set @uda = 11
set @synonym = 12
--above 100 -> not in sys.objects
set @uddt = 101
set @xml = 102
set @udt = 103
set @assm = 1000
set @part_sch = 201
set @part_func = 202

/*
* Create #t1 as temp object holding areas. Columns are:
* object_id - temp object id
* object_type - temp object type
* relative_id - parent or child object id
* relative_type - parent or child object type
* rank - NULL means dependencies not yet evaluated, else nonNULL.
* soft_link - this row should not be used to compute ordering among objects
* object_name - name of the temp object
* object_schema - name the temp object's schema (if any)
* relative_name - name of the relative object
* relative_schema - name of the relative object's schema (if any)
* degree - the number of relatives that the object has, will be used for computing the rank
* object_key - surrogate key that combines object_id and object_type
* relative_key - surrogate key that combines relative_id and relative_type
*/
-- DROP TABLE #t1
create table #t1(
object_id int NULL,
object_type smallint NULL,
relative_id int NULL,
relative_type smallint NULL,
rank smallint NULL,
soft_link bit NULL,
object_name sysname NULL,
object_schema sysname NULL,
relative_name sysname NULL,
relative_schema sysname NULL,
degree int NULL,
object_key bigint NULL,
relative_key bigint NULL
)

create unique clustered index i1 on #t1(object_id, object_type, relative_id, relative_type) with IGNORE_DUP_KEY

declare @iter_no int
set @iter_no = 1

declare @rows int
set @rows = 1

declare @rowcount_ck int
set @rowcount_ck = 0

insert #t1 (relative_id, relative_type, rank)
select l.objid, l.objtype, @iter_no from #tempdep l

while @rows > 0
begin
set @rows = 0
if( 1 = @find_referencing_objects )
begin
--tables that reference uddts or udts (parameters that reference types are in sql_dependencies )
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, c.object_id, @u, @iter_no + 1
from #t1 as t
join sys.columns as c on c.user_type_id = t.relative_id
join sys.tables as tbl on tbl.object_id = c.object_id -- eliminate views
where @iter_no = t.rank and (t.relative_type=@uddt OR t.relative_type=@udt)
set @rows = @rows + @@rowcount

--tables that reference defaults ( only default objects )
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, clmns.object_id, @u, @iter_no + 1
from #t1 as t
join sys.columns as clmns on clmns.default_object_id = t.relative_id
join sys.objects as o on o.object_id = t.relative_id and 0 = isnull(o.parent_object_id, 0)
where @iter_no = t.rank and t.relative_type = @def
set @rows = @rows + @@rowcount

--types that reference defaults ( only default objects )
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, tp.user_type_id, @uddt, @iter_no + 1
from #t1 as t
join sys.types as tp on tp.default_object_id = t.relative_id
join sys.objects as o on o.object_id = t.relative_id and 0 = isnull(o.parent_object_id, 0)
where @iter_no = t.rank and t.relative_type = @def
set @rows = @rows + @@rowcount

--tables that reference rules
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, clmns.object_id, @u, @iter_no + 1
from #t1 as t
join sys.columns as clmns on clmns.rule_object_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @rule
set @rows = @rows + @@rowcount

--types that reference rules
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, tp.user_type_id, @uddt, @iter_no + 1
from #t1 as t
join sys.types as tp on tp.rule_object_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @rule
set @rows = @rows + @@rowcount

--tables that reference XmlSchemaCollections
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, c.object_id, @u, @iter_no + 1
from #t1 as t
join sys.columns as c on c.xml_collection_id = t.relative_id
join sys.tables as tbl on tbl.object_id = c.object_id -- eliminate views
where @iter_no = t.rank and t.relative_type = @xml
set @rows = @rows + @@rowcount

--procedures that reference XmlSchemaCollections
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, c.object_id, case when o.type in ( 'P', 'RF', 'PC' ) then @sp else @udf end, @iter_no + 1
from #t1 as t
join sys.parameters as c on c.xml_collection_id = t.relative_id
join sys.objects as o on o.object_id = c.object_id
where @iter_no = t.rank and t.relative_type = @xml
set @rows = @rows + @@rowcount

--udf, sp, uda, trigger all that reference assembly
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, am.object_id, (case o.type when 'AF' then @uda when 'PC' then @sp when 'FS' then @udf when 'FT' then @udf
when 'TA' then @tr else @udf end), @iter_no + 1
from #t1 as t
join sys.assembly_modules as am on am.assembly_id = t.relative_id
join sys.objects as o on am.object_id = o.object_id
where @iter_no = t.rank and t.relative_type = @assm
set @rows = @rows + @@rowcount

-- CLR udf, sp, uda that reference udt
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select distinct t.relative_id,
t.relative_type,
am.object_id,
(case o.type
when 'AF' then @uda
when 'PC' then @sp
when 'FS' then @udf
when 'FT' then @udf
when 'TA' then @tr
else @udf end),
@iter_no + 1
from #t1 as t
join sys.parameters as sp on sp.user_type_id = t.relative_id
join sys.assembly_modules as am on sp.object_id = am.object_id
join sys.objects as o on sp.object_id = o.object_id
where @iter_no = t.rank and t.relative_type = @udt
set @rows = @rows + @@rowcount

--udt that reference assembly
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, at.user_type_id, @udt, @iter_no + 1
from #t1 as t
join sys.assembly_types as at on at.assembly_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @assm
set @rows = @rows + @@rowcount

--assembly that reference assembly
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, ar.assembly_id, @assm, @iter_no + 1
from #t1 as t
join sys.assembly_references as ar on ar.referenced_assembly_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @assm
set @rows = @rows + @@rowcount

--table references table
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, fk.parent_object_id, @u, @iter_no + 1
from #t1 as t
join sys.foreign_keys as fk on fk.referenced_object_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @u
set @rows = @rows + @@rowcount

--table,view references partition scheme
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, idx.object_id, case o.type when 'V' then @v else @u end, @iter_no + 1
from #t1 as t
join sys.indexes as idx on idx.data_space_id = t.relative_id
join sys.objects as o on o.object_id = idx.object_id
where @iter_no = t.rank and t.relative_type = @part_sch
set @rows = @rows + @@rowcount

--partition scheme references partition function
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, ps.data_space_id, @part_sch, @iter_no + 1
from #t1 as t
join sys.partition_schemes as ps on ps.function_id = t.relative_id
where @iter_no = t.rank and t.relative_type = @part_func
set @rows = @rows + @@rowcount

--view, procedure references table, view, procedure
--procedure references type
--table(check) references procedure
--trigger references table, procedure
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, case when 'C' = obj.type then obj.parent_object_id else dp.object_id end,
case when obj.type in ('U', 'C') then @u when 'V' = obj.type then @v when 'TR' = obj.type then @tr
when obj.type in ( 'P', 'RF', 'PC' ) then @sp
when obj.type in ( 'TF', 'FN', 'IF', 'FS', 'FT' ) then @udf
end, @iter_no + 1
from #t1 as t
join sys.sql_dependencies as dp on
-- reference table, view procedure
( class < 2 and dp.referenced_major_id = t.relative_id and t.relative_type in ( @u, @v, @sp, @udf) )
--reference type
or ( 2 = class and dp.referenced_major_id = t.relative_id and t.relative_type in (@uddt, @udt))
--reference xml namespace ( not supported by server right now )
--or ( 3 = class and dp.referenced_major_id = t.relative_id and @xml = t.relative_type )
join sys.objects as obj on obj.object_id = dp.object_id and obj.type in ( 'U', 'V', 'P', 'RF', 'PC', 'TR', 'TF', 'FN', 'IF', 'FS', 'FT', 'C')
where @iter_no = t.rank
set @rows = @rows + @@rowcount

end -- 1 = @find_referencing_objects
else
begin -- find referenced objects
--check references table
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, dp.object_id, 77 /*place holder for check*/, @iter_no
from #t1 as t
join sys.sql_dependencies as dp on
-- reference table
class < 2 and dp.referenced_major_id = t.relative_id and t.relative_type = @u
join sys.objects as obj on obj.object_id = dp.object_id and obj.type = 'C'
where @iter_no = t.rank
set @rowcount_ck = @@rowcount

--view, procedure referenced by table, view, procedure
--type referenced by procedure
--check referenced by table
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select distinct
case when 77 = t.relative_type then obj2.parent_object_id else t.relative_id end, -- object_id
case when 77 = t.relative_type then @u else relative_type end, -- object_type
dp.referenced_major_id, -- relative_id
case -- relative_type
when dp.class < 2 then
case when 'U' = obj.type then @u
when 'V' = obj.type then @v
when 'TR' = obj.type then @tr
when obj.type in ( 'P', 'RF', 'PC' ) then @sp
when obj.type in ( 'TF', 'FN', 'IF', 'FS', 'FT' ) then @udf
when exists (select * from sys.synonyms syn where syn.object_id = dp.referenced_major_id ) then @synonym
end
when dp.class = 2 then (case
when exists (select * from sys.assembly_types sat where sat.user_type_id = dp.referenced_major_id) then @udt
else @uddt
end)
end,
@iter_no + 1
from #t1 as t
join sys.sql_dependencies as dp on
-- reference table, view procedure
( class < 2 and dp.object_id = t.relative_id and ( t.relative_type in ( @u, @v, @sp, @udf ) or 77 = t.relative_type ) )
--reference type
or ( 2 = class and dp.object_id = t.relative_id and t.relative_type in ( @sp, @udf ) )
left join sys.objects as obj on obj.object_id = dp.referenced_major_id
left join sys.objects as obj2 on obj2.object_id = t.relative_id and 'C' = obj2.type
where @iter_no = t.rank
set @rows = @rows + @@rowcount - @rowcount_ck

end -- find referenced objects

if @rows > 0
begin
set @iter_no = @iter_no + 1
end
end -- while @rows > 0

if( 0 = @find_referencing_objects )
begin

--defaults referenced by types
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, tp.default_object_id, @def, @iter_no + 1
from #t1 as t
join sys.types as tp on tp.user_type_id = t.relative_id and tp.default_object_id > 0
join sys.objects as o on o.object_id = tp.default_object_id and 0 = isnull(o.parent_object_id, 0)
where t.relative_type = @uddt

--defaults referenced by tables( only default objects )
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, clmns.default_object_id, @def, @iter_no + 1
from #t1 as t
join sys.columns as clmns on clmns.object_id = t.relative_id
join sys.objects as o on o.object_id = clmns.default_object_id and 0 = isnull(o.parent_object_id, 0)
where t.relative_type = @u

--rules referenced by types
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, tp.rule_object_id, @rule, @iter_no + 1
from #t1 as t
join sys.types as tp on tp.user_type_id = t.relative_id and tp.rule_object_id > 0
where t.relative_type = @uddt

--rules referenced by tables
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, clmns.rule_object_id, @rule, @iter_no + 1
from #t1 as t
join sys.columns as clmns on clmns.object_id = t.relative_id and clmns.rule_object_id > 0
where t.relative_type = @u

--XmlSchemaCollections referenced by table
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, c.xml_collection_id, @xml, @iter_no + 1
from #t1 as t
join sys.columns as c on c.object_id = t.relative_id and c.xml_collection_id > 0
where t.relative_type = @u

--XmlSchemaCollections referenced by procedures
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, c.xml_collection_id, @xml, @iter_no + 1
from #t1 as t
join sys.parameters as c on c.object_id = t.relative_id and c.xml_collection_id > 0
where t.relative_type in ( @sp, @udf)

--partition scheme referenced by table,view
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, ps.data_space_id, @part_sch, @iter_no + 1
from #t1 as t
join sys.indexes as idx on idx.object_id = t.relative_id
join sys.partition_schemes as ps on ps.data_space_id = idx.data_space_id
where t.relative_type in (@u, @v)

--partition function referenced by partition scheme
insert #t1 (object_id, object_type, relative_id, relative_type, rank)
select t.relative_id, t.relative_type, ps.function_id, @part_func, @iter_no + 1
from #t1 as t
join sys.partition_schemes as ps on ps.data_space_id = t.relative_id
where t.relative_type = @part_sch

end

--cleanup circular references
delete #t1 where object_id = relative_id and object_type=relative_type

--allow circular dependencies by cuting one of the branches
--mark as soft links dependencies between tables or table depending on udf
-- at script time we will need to take care to script fks and checks separately
update #t1 set soft_link = 1 where ( object_type = @u and relative_type = @u ) or
( 0 = @find_referencing_objects and object_type = @u and relative_type = @udf ) or
( 1 = @find_referencing_objects and relative_type = @u and object_type = @udf )

--add independent objects first in the list
insert #t1 ( object_id, object_type, rank)
select t.relative_id, t.relative_type, 1 from #t1 t where t.relative_id not in ( select t2.object_id from #t1 t2 where not t2.object_id is null )

--delete initial objects
delete #t1 where object_id is null

-- compute the surrogate keys to make sorting easier
update #t1 set object_key = object_id + convert(bigint, 0xfFFFFFFF) * object_type
update #t1 set relative_key = relative_id + convert(bigint, 0xfFFFFFFF) * relative_type

create index index_key on #t1 (object_key, relative_key)

update #t1 set rank = 0
-- computing the degree of the nodes
update #t1 set degree = (
select count(*)
from #t1 t_alias
where t_alias.object_key = #t1.object_key and
t_alias.relative_id is not null and
t_alias.soft_link is null)

-- perform topological sorting
set @iter_no=1
while 1=1
begin
update #t1 set rank=@iter_no where degree=0
-- end the loop if no more rows left to process
if (@@rowcount=0) break
update #t1 set degree=NULL where rank = @iter_no

update #t1 set degree = (
select count(*)
from #t1 t_alias
where t_alias.object_key = #t1.object_key and
t_alias.relative_key is not null and
t_alias.relative_key in (select t_alias2.object_key from #t1 t_alias2 where t_alias2.rank=0 and t_alias2.soft_link is null) and
t_alias.rank=0 and t_alias.soft_link is null)
where degree is not null

set @iter_no=@iter_no+1
end

--add name schema
update #t1 set object_name = o.name, object_schema = schema_name(o.schema_id)
from sys.objects AS o
where o.object_id = #t1.object_id and object_type in ( @u, @udf, @v, @sp, @def, @rule, @uda)

update #t1 set relative_type = case op.type when 'V' then @v else @u end, object_name = o.name, object_schema = schema_name(o.schema_id), relative_name =
op.name, relative_schema = schema_name(op.schema_id)
from sys.objects AS o
LEFT OUTER join sys.objects AS op on op.object_id = o.parent_object_id
--join sys.objects AS op on op.object_id = o.object_id
where o.object_id = #t1.object_id and object_type = @tr

update #t1 set object_name = t.name, object_schema = schema_name(t.schema_id)
from sys.types AS t
where t.user_type_id = #t1.object_id and object_type in ( @uddt, @udt )

update #t1 set object_name = x.name, object_schema = schema_name(x.schema_id)
from sys.xml_schema_collections AS x
where x.xml_collection_id = #t1.object_id and object_type = @xml

update #t1 set object_name = p.name, object_schema = null
from sys.partition_schemes AS p
where p.data_space_id = #t1.object_id and object_type = @part_sch

update #t1 set object_name = p.name, object_schema = null
from sys.partition_functions AS p
where p.function_id = #t1.object_id and object_type = @part_func

update #t1 set object_name = a.name, object_schema = null
from sys.assemblies AS a
where a.assembly_id = #t1.object_id and object_type = @assm

update #t1 set object_name = syn.name, object_schema = schema_name(syn.schema_id)
from sys.synonyms AS syn
where syn.object_id = #t1.object_id and object_type = @synonym

-- delete objects for which we could not resolve the table name or schema
-- because we may not have enough privileges
delete from #t1
where
object_name is null or
(object_schema is null and object_type not in (@assm, @part_func, @part_sch))

--final select
select a.object_id,
object_name,
--object_type,
ISNULL(c.type_desc,'') AS Object_Type,
--LTRIM(RTRIM(ISNULL(sm.definition,''))) AS PARENT_DEF,
ISNULL(relative_id,'') AS relative_id,
relative_name = ISNULL(b.name,''),
Rank,
--ISNULL(relative_type,'') AS Relative_Type,

--object_schema,

ISNULL(b.type_desc,'') AS Type_Desc--,
--LTRIM(RTRIM(ISNULL(smc.definition,''))) AS CHILD_DEF
from #t1 a
LEFT JOIN sys.objects b ON b.object_id = a.relative_id
LEFT JOIN sys.Objects c ON c.object_id = a.object_id
LEFT JOIN sys.sql_modules sm ON sm.object_id = c.object_id
LEFT JOIN sys.sql_modules smc ON smc.object_id = b.object_id
WHERE (sm.definition LIKE '%' + @DefSearchTerm + '%' OR smc.definition LIKE '%' + @DefSearchTerm + '%')
-- WHERE object_name = 'Organization_all'
order by Rank,relative_id

drop table #t1
drop table #tempdep

IF @must_set_nocount_off > 0
set nocount off

END;
GO
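Usage is a single EXEC with your search term; for example, to find every object whose definition mentions a (hypothetical) word like 'Invoice':

```sql
-- Search all parent and child object definitions for a keyword
-- ('Invoice' is just an example term)
EXEC [dbo].[Object_Dependency_Keyword_SP] @DefSearchTerm = 'Invoice';
```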


 


SQL Object (Table, Stored Procedures, Functions, Views) Dependencies

I often get asked to help figure out a confusing and archaic SQL database structure: various developers over the years have added layer after layer of stored procedures, tables, views and functions, nobody in the present day actually knows what much of it is or does anymore, and folks are paralyzed because they are afraid to make any kind of change. However, not all is lost! Using the sample below you can see how your stored procedures, tables, views and functions are connected to each other, what you will need to address, and what you will need to test thoroughly if you make any kind of change. The query below is for SQL Server, but I am working on versions for MySQL, DB2 and Oracle as well, so check back later for those.

USAGE:

select distinct [Table Name] = o.Name, [Found In] = sp.Name, sp.type
from sys.objects o inner join sys.sql_expression_dependencies sd on o.object_id = sd.referenced_id
inner join sys.objects sp on sd.referencing_id = sp.object_id
and sp.type in ('P','TR', 'FN', 'V', 'TF','IF')

-- where o.name = 'Your Table Name'
--where sp.Name = 'your stored procedure, function or trigger or other type'

--order set to sort by table, object type and object name
order by o.Name, sp.type,sp.name
-- change order to drill down from procedure, trigger or function name or type
--order by sp.name,o.Name, sp.type_desc

--Type Options

--C CHECK_CONSTRAINT
--D DEFAULT_CONSTRAINT
--F FOREIGN_KEY_CONSTRAINT
--FN SQL_SCALAR_FUNCTION
--IF SQL_INLINE_TABLE_VALUED_FUNCTION
--IT INTERNAL_TABLE
--P SQL_STORED_PROCEDURE
--PK PRIMARY_KEY_CONSTRAINT
--R RULE
--S SYSTEM_TABLE
--SQ SERVICE_QUEUE
--TF SQL_TABLE_VALUED_FUNCTION
--TR SQL_TRIGGER
--U USER_TABLE
--UQ UNIQUE_CONSTRAINT
--V VIEW


Dynamic Predicate (WHERE clause) for a LINQ Query

I love LINQ! It really has been a godsend, and anyone still using DataTable.Select seriously needs to learn it. One thing I didn’t like, though, was having to write each LINQ query out instead of being able to pass it off to a function as a parameter. I haven’t gotten all the way there yet, but this was a start. The idea is that when we have a source of data we refer to often, we pass the predicate (the WHERE statement) off to a function instead of rewriting the whole query each time. While this example uses a DataTable, you can use it with any collection that LINQ can be written against.

So how is it done? First we identify the single element type we need to use (Of TRow As DataRow), then identify the “source” we are using and tie the type parameter to that source (source As TypedTableBase(Of TRow)). Next we specify the predicate, the WHERE clause that is going to be passed in (predicate As Func(Of TRow, Boolean)), which evaluates each row to true or false. Then we identify how we want the returned information ordered (OrderByField As String). The function returns an EnumerableRowCollection(Of TRow), the collection of DataRows that met the conditions of our predicate. This is a basic example: make sure your order field doesn’t contain nulls (or that you have handled that situation properly), and make sure your column names are standard (if you are using a strongly typed data source, never mind this; it will rename the columns for you).

What is left TO DO is to accomplish this with a table join, which is my next step. It won’t be hard. Hopefully this code helps you, and I hope you all have a great Christmas!

VB

USAGE:

 Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Dim ds As New DataSet1
        Dim da As New DataSet1TableAdapters.OrdersTableAdapter
        da.Fill(ds.Orders)
        Dim MyRet = LINQ_Where(ds.Orders, Function(row As DataSet1.OrdersRow) row.Order_ID < 200 And row.Order_ID > 1, "Order ID")
        If MyRet Is Nothing Then
            MessageBox.Show("NO ROWS")
        Else
            DataGridView1.DataSource = MyRet.CopyToDataTable
        End If
    End Sub

 

 Function LINQ_Where(Of TRow As DataRow)(source As TypedTableBase(Of TRow), predicate As Func(Of TRow, Boolean), OrderByField As String) As EnumerableRowCollection(Of TRow)

        Try

            Dim ReturnedRows = From row In source
                               Where predicate(row)
                               Order By row.Item(OrderByField)
                               Select row
            If ReturnedRows.Any() Then
                Return ReturnedRows
            Else
                Return Nothing
            End If

        Catch ex As Exception
            Return Nothing
        End Try

    End Function

C#

 

USAGE:

private void Form1_Load(object sender, EventArgs e)
{
        DataSet1 ds = new DataSet1();
        DataSet1TableAdapters.OrdersTableAdapter da = new DataSet1TableAdapters.OrdersTableAdapter();
        da.Fill(ds.Orders);
        var MyRet = LINQ_Where(ds.Orders, (DataSet1.OrdersRow row) => row.Order_ID < 200 && row.Order_ID > 1, "Order ID");
        if (MyRet == null)
        {
            MessageBox.Show("NO ROWS");
        }
        else
        {
            DataGridView1.DataSource = MyRet.CopyToDataTable();
        }
    }

 

public EnumerableRowCollection<TRow> LINQ_Where<TRow>(TypedTableBase<TRow> source, Func<TRow, bool> predicate, string OrderByField) where TRow: DataRow
{

        try
        {

            var ReturnedRows = from row in source
                where predicate(row)
                orderby row[OrderByField]
                select row;
            if (ReturnedRows.Any())
            {
                return ReturnedRows;
            }
            else
            {
                return null;
            }

        }
        catch (Exception)
        {
            return null;
        }

    }


Northwind Database For DB2

Well, if I thought Oracle was a mess to get the Northwind database into, I hadn’t seen anything yet. I don’t care for the user tools in IBM Data Studio at all, and the mapping to get SSIS to push a copy of my SQL-based Northwind database over was all over the map. If you don’t know, Northwind is the training tool of choice for many developers, businesses and consultants.

So again, I am doing you a solid. Linked here is the SSIS package you can pick up and use to transfer your SQL based version of Northwind to DB2. I also included the DB2 ddl for the database for your reference. I haven’t found an easy way to do it from the Access version. Sorry folks.

A couple of things. You will need to create the database on your DB2 instance as shown below. Use your standard command prompt with admin privileges.

create database NORTHWIN using codeset UTF-8 territory en

You may notice I left off the D. Yes, I did. DB2 prefers its database names to be eight characters or less, so that is why.

Get the DTSX package here.

Once you have it, open it in Notepad and edit these lines with your information:

DTS:ConnectionString="Data Source=DESKTOP-P1TI349;Initial Catalog=Northwind;Provider=SQLOLEDB;Integrated Security=SSPI;Auto Translate=false;" />

DTS:ConnectionString="Data Source=NORTHWIN;User ID=db2admin;Provider=IBMOLEDB.DB2COPY1;Persist Security Info=True;Location=DESKTOP-P1TI349:50000;Extended Properties=&quot;&quot;;">

Find/Replace DB2COPY1 with the name of your instance.

Save the file and open in SSIS.

And then you should be good to go…. If you have changes in your SQL Northwind database that aren’t reflected here (it will prompt you), you will need to use SSIS to transfer the data. Be sure to choose your DB2 instance name as the destination. (Mine was DB2COPY1.)

That’s about it. Have a good day….


Northwind Database For Oracle

Ah, Northwind. The database everybody learns on. Well, almost everybody. I realize it is a Microsoft product, but you would have thought somebody along the line would have developed the same for Oracle and MySQL.

Why did this come up? In the next rendition of MySQLMove (which is being renamed for the next release; I still haven’t settled on a title yet, something that indicates it is a multi-environment data collector and reporter, so if you have suggestions or ideas please let me know) I wanted to add Oracle back ends as an option to get your data from. In the examples for past releases I used Northwind to demonstrate how to use the software. So I looked high and low on the web and didn’t find a single usable version of Northwind for Oracle. So I did it the old-fashioned way: I exported my Northwind from SQL, table by table, to Excel files and imported them into Oracle. There’s an hour of my life I won’t get back. So, just in case anyone else has to go through this, I pulled all the Oracle scripts for creating the tables and inserting the data, zipped them up, and have made them available to you for download. All you have to do is download it, unzip it and run the scripts. Why? Because I am a nice guy that way and I had the time. If I saved anyone some time, it was worth it.


MySQLMove PivotWizard Adds SQL,CSV, MS Access Support and Chart Creation

If you haven't been following along, MySQLMove is a tool that started out as a way for MySQL developers to do things that the free tools for managing MySQL did not do well. One of those things was the PivotWizard, which originally gave MySQL developers, who do not have access to a pivot-related function, a way to accomplish the same thing here. As time went on, it became the most popular part of the software. The early versions of the PivotWizard were crude and did just the basics of pivoting data. I first added SQL Server support. Now I have added Microsoft Access and CSV file support, in addition to the MySQL and SQL Server database support, and have also given you the ability to choose from 50+ different chart types so you can look at your data in different ways. A lot has changed, so I am going to go through it again, step by step.

You can download it here. Be sure to remove the old version if you installed it, as the default MS install package can be buggy about removing it. Below are the directions for how to use the PivotWizard.

When you open MySQLMove you will see the following:

image

Click on the “Pivot Wizard”.

You will then see the screen that allows you to specify your connection type and parameters. If you choose MS Access, you can use mdb or accdb file types and can specify whether you use a password for it or not. If you choose MS Access or CSV, a file dialog box will appear and ask you to browse to the path where your file is located. If you choose CSV, it MUST be a comma-delimited file and you should have a header row to define the columns. (Incidentally, I should note that as long as your CSV files are located inside the same directory (folder) you specified in your connection, you can join separate files in a query just as if you were joining two SQL tables, like so:

SELECT * FROM [Orders.csv] a
INNER JOIN [Employees.csv] b ON b.[ID] = a.[Employee ID]

No need for tools like PowerShell, other applications or confusing syntax.)

If you choose MySQL, be precise about the casing of names. For this example, we will check the box for SQL Server and fill in its connection parameters.

image

I suggest you click on the Test button and verify all is well. It will be checked for you before you leave the following screen, but you may as well get that out of the way now. If you get any of this information wrong, you can't proceed until it is fixed.

Once this is done, click the “Next” button.

image

You will then be asked to type or copy and paste a query based on this connection into the textbox. If you chose the CSV format on the earlier screen, this information will be filled in for you. After you enter the query, you must click the "Parse" button. This will verify your connection is correct and your statement compiles. If it does, you will be told via message box how many columns you have to work with, and then you will click "Next" to proceed. If it does not compile correctly, you will be informed and you will have to fix the statement to be able to proceed. (Please don't email me about those kinds of issues. I really don't have time for that.) You might also see a prompt letting you know it compiled with errors. As long as the issues aren't fatal, you will be allowed to proceed, and a report will be given to you letting you know what the problems were.

Below the query box, you are asked to identify what you want Excel to call your report. Since this is the raw data, I called it "Raw Data". This field is not mandatory. You might also notice the "?" icon. In the previous version, I attempted to direct you to links to the MySQL Facebook group to answer common questions. That was a miserable failure. Instead, in this edition I have placed a tooltip: when you hover over the icon, it gives an explanation of what is going on.

Once this step is completed click the “Next” button.

image

Here you will pick which columns you want to appear as the rows in your pivot report. You can select multiple columns as rows; just be sure they are related somehow. Once selected, you are allowed to choose a name for the field; Excel will use that name instead of the field name in your report. Click the "Next" button to continue.

image

You will then be asked to choose the "data" for your report. In this case I have chosen to see how many Order_IDs exist for each salesman (see the last screen). Use the arrows to choose your fields (you can have more than one, and you can reuse the same field if need be) and select a grouping to be applied. Click the "Next" button to continue.

image

You will then be asked to choose the columns you want to appear in your pivot report and how you want the data grouped. Notice we chose to see Order_Date twice in our columns, because we want it grouped by Quarter and, inside that quarter, by Day of Week. I add a column to your original query with the same name and data, with a "_1" after it. So now we have asked to see how many orders each salesman did, divided into quarters and, inside each quarter, by Day of Week. Play with it; you will get the hang of it. Click "Next" to continue.
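To make that "_1" behavior concrete, here is a hypothetical sketch of what the rewritten query effectively looks like; the table and column names are assumptions based on a Northwind-style schema, not what the wizard literally emits:

```sql
SELECT o.Employee_ID,
       o.Order_Date,                  -- first copy: grouped by Quarter
       o.Order_Date AS Order_Date_1   -- duplicated copy: grouped by Day of Week
FROM Orders o;
```

Each copy of the column then carries its own grouping rule in the pivot report.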

image

Here you choose the type of chart to represent the data you have requested, and in Chart Title you give Excel the information it needs to name your sheet and chart what you want. Hopefully the images of the charts will assist you in choosing the chart type you want. Click the "Next" button to continue.

image

Finally, the end. Now you can give your pivot report a name for Excel to use for your pivoted data (or not, if you don't care) and then click the "Finish" button. Be patient; it takes a few seconds for Excel to show you your work. Just as a word of caution, I tried to anticipate anything you could do wrong in naming your charts and reports. If something should happen, you will get a notice of what the problem is, but it will still allow you to see what you have produced. It just will not give the worksheet the name you tried to give it.

You can see the sample we produced here.

The next version of this software will allow you to save templates of different reports and to create more than just one query. It will allow you to store multiple connections, multiple queries, and pivot reports and charts specific to each query, all produced into the same Excel workbook. Keep an eye on here and on the MySQLMove Facebook page for updates. Again, you can download the install package for MySQLMove here.

Well that about covers it. Well not quite. I have been asked if I would be willing to do a custom version of this for a specific person or company. The answer is of course, yes. Also if you have bugs you have seen please let me know. Just email me at kellyjmartens@hotmail.com and I would be happy to help.


MySQLMove 1.2 Adds PivotWizard…Works with SQL Too..

In addition to the lack of effective database migration components in the free software management tools for MySQL, there was another piece missing that SQL Server developers have had forever: a PIVOT function. Yes, with various hacks you can achieve the same idea in your MySQL query, but never with the power that SQL Server gives us. So, since MySQLMove was intended to offer some easy-to-use tools to fill what is lacking there, I decided to add pivot functionality for MySQL data to the toolbox. I also decided to make it available for use with SQL Server. You may ask why I did that, since it is an easy matter to use the PIVOT function there, or some of the other excellent tools afforded us in SQL Server Management Studio, SSRS, and Excel. The reason is this: while experienced developers are very happy to use those tools, the less experienced and the average person wanted a tool that didn't seem so... huge to them. The idea of drag and drop, and all those options with a learning curve attached, intimidates them and makes them not want to use the tools provided there. Plus, not everyone is comfortable with visually based environments. It also does not hurt that, as a primarily SQL Server based back-end guy, I can use this tool, or teach it, in minutes.
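For context, here is the shape of one of those typical MySQL pivot hacks: conditional aggregation, with one hand-written CASE per output column. This is only an illustrative sketch; the table and column names are assumptions based on a Northwind-style schema:

```sql
-- Count each salesman's orders per quarter without a PIVOT keyword.
SELECT e.LastName AS Salesman,
       SUM(CASE WHEN QUARTER(o.OrderDate) = 1 THEN 1 ELSE 0 END) AS Q1,
       SUM(CASE WHEN QUARTER(o.OrderDate) = 2 THEN 1 ELSE 0 END) AS Q2,
       SUM(CASE WHEN QUARTER(o.OrderDate) = 3 THEN 1 ELSE 0 END) AS Q3,
       SUM(CASE WHEN QUARTER(o.OrderDate) = 4 THEN 1 ELSE 0 END) AS Q4
FROM Orders o
INNER JOIN Employees e ON e.EmployeeID = o.EmployeeID
GROUP BY e.LastName;
```

SQL Server's PIVOT does this in one clause; in MySQL every new output column means another CASE expression, which is exactly the tedium a wizard can remove.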

So let's talk about the MySQLMove Pivot Wizard for a bit…

First you must get the installation media here…. If you have installed a version of MySQLMove previously, you might want to uninstall it first; the Microsoft-provided template installer is buggy when it comes to removing previous versions anyway. But here is the location of the installation file.

After install, double click the MySQLMove icon on your desktop and you will see:

image

Click the “Pivot Wizard” button.

 

image

It is pretty straightforward. Keep in mind that if you are using non-Windows-based MySQL servers, the casing of these properties is a bigger deal than on some Windows-based systems. Notice the checkboxes on the bottom to choose your server type. You have to choose "MySQL" or "SQL".

Also note the "?" button. If you need help at any time during the pivot wizard, you can click on these buttons to get it. I am trying something different here: basically, I am trying out Facebook as a tool for end-user support. When you click on these blue buttons, it will take you to a specific post in a Facebook group called "MySQLMove Help". OK, you don't have to be in awe of such original naming, lol. You don't need to join the group for the help buttons to work. In fact, I don't know why you would; I have disabled comments, so it isn't a discussion forum. A software developer who isn't overly social. There's a shock, right?

After your login information has been accepted you will see this:

image

After you enter your statement, you click the "Parse" button. One of three things will happen:

1. Your statement could not be compiled.

2. It compiles correctly, and you move on to the next screen of the wizard.

3. It compiles correctly, but there were issues with the data. MySQL can be finicky this way if you are doing a lot of joins and there are duplicate columns or data that has constraints. It will inform you of the issues and show you a report, but if the problem is not fatal it will allow you to continue. I do this because not every situation we encounter in real life has perfect data.

image

Also, if you are concerned about injection attacks, know that I have taken measures to ensure that no such thing can occur. I have barred the use of several keywords that would be needed to do so. So rest easy on that.

After that is done, you encounter the screen that asks you which columns you want to see depicted as rows. You use the arrows in the middle to add or remove the columns that will be used as rows. In this example, I have four breakdowns occurring: the salesman is king and is seen first, and it then breaks down into customers, categories of products sold, and then the actual product.

image

You can use the arrows to the right to move your selected row items to different positions of priority. Once you are happy with what you have you click the Next button.

The next screen is the data headers screen. This is the data that will actually be presented in the report. You have several options available for choosing how to summarize that data. They are:

Average
The average of the values.
Count
The number of values (excluding Null and DBNull values).
Max
The largest value.
Min
The smallest value.
StdDev
An estimate of the standard deviation of a population, where the sample is a subset of the entire population.
StdDevp
The standard deviation of a population, where the population is all of the data to be summarized.
Sum
The sum of the values.
Var
An estimate of the variance of a population, where the sample is a subset of the entire population.
Varp
The variance of a population, where the population is all of the data to be summarized.

For this example we are counting the number of orders that meet our row criteria. You can have up to eight data fields.
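If the StdDev/StdDevp and Var/Varp pairs above are unfamiliar: the first of each pair treats your rows as a sample of a larger population (dividing by n − 1), while the second treats them as the entire population (dividing by n). MySQL happens to expose both flavors directly, which maps neatly onto the options above; `Freight` and `Orders` here are assumed Northwind-style names used only for illustration:

```sql
SELECT COUNT(Freight)       AS order_count,  -- Count
       AVG(Freight)         AS avg_freight,  -- Average
       STDDEV_SAMP(Freight) AS std_dev,      -- StdDev  (sample estimate)
       STDDEV_POP(Freight)  AS std_devp,     -- StdDevp (whole population)
       VAR_SAMP(Freight)    AS var_s,        -- Var
       VAR_POP(Freight)     AS varp          -- Varp
FROM Orders;
```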

image

Once you are satisfied with the data field selection click the Next button.

You then have a screen that asks which columns you want to appear in your pivot report and how to group that data in relation to your data field (how many orders were submitted). In this example we have asked for the order date, and we selected from the drop-down to group that information by quarter. You can choose as many columns as your heart desires, or as the size of the paper you are printing on will allow.

image

Other grouping methods available here are:

Alphabetical
Combines field values into categories according to the character that the values start with.

Date
This option is in effect only for fields that store date/time values.
Field values are grouped by the date part. The time part of the values is ignored.

DateDay
This option is in effect only for fields that store date/time values.
Field values are grouped by the day part. The following groups can be created: 1, 2, 3,…,31.

DateDayOfWeek
This option is in effect only for fields that store date/time values.
Field values are grouped by the days of the week. Examples of such groups: Sunday, Monday, Tuesday (the actual names of the days of the week are determined by the current culture).

DateDayOfYear
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of the day in which they occur in a year. The following groups can be created: 1, 2, 3,…,365 (,366 in a leap year).

DateHour
This option is in effect only for fields that store date/time values.
Field values are grouped by the date part with the hour value. Examples of such groups: 3/4/2012 0:00, 3/4/2012 1:00, 3/4/2012 2:00, …

DateHourMinute
This option is in effect only for fields that store date/time values.
Field values are grouped by the date part with the hour and minute values. Examples of groups: 3/4/2012 0:00, 3/4/2012 0:01, 3/4/2012 0:02, …

DateHourMinuteSecond
This option is in effect only for fields that store date/time values.
Field values are grouped by the date part with the hour, minute and second values. Examples of groups: 3/4/2012 0:00:00, 3/4/2012 0:00:01, 3/4/2012 0:00:02, …

DateMonth
This option is in effect only for fields that store date/time values.
Field values are grouped by the month part. Examples of groups: January, February, March (the actual names of the months are determined by the current culture).

DateMonthYear
This option is in effect only for fields that store date/time values.
Field values are grouped by months and years. Examples of groups: August 2013, September 2014, January 2015, …

DateQuarter
This option is in effect only for fields that store date/time values.
Field values are sorted by the quarterly intervals of the year. The following groups can be created: 1, 2, 3 and 4. Each quarter includes three months.

DateQuarterYear
This option is in effect only for fields that store date/time values.
Field values are grouped by the year and quarter. Examples of groups: Q3 2012, Q4 2012, Q1 2013, Q2 2013, …

DateWeekOfMonth
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of the week in which they occur in a month. The following groups can be created: 1, 2, 3, 4 and 5. The first week is the week containing the 1st day of the month.

DateWeekOfYear
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of the week in a year in which they occur. The following groups can be created: 1, 2, 3,…,52, 53.
Week numbers are calculated based on the current culture's settings.

DateYear
This option is in effect only for fields that store date/time values.
Field values are grouped by the year part. Examples of such groups: 2003, 2004, 2005.

DayAge
This option is in effect only for fields that store date/time values. Field values are grouped by the number of full days that have elapsed till the current date.

Default
Groups combine unique field values.

Hour
This option is in effect only for fields that store date/time values.
Field values are grouped by the hour part, regardless of the date to which the current date/time value belongs.

Minute
This option is in effect only for fields that store date/time values.
Field values are grouped by the minute part, regardless of the date to which the current date/time value belongs.

MonthAge
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of full months that have elapsed till the current date.

Numeric
This option is in effect only for fields that store numeric values.
Field values are grouped into intervals.

Second
This option is in effect only for fields that store date/time values.
Field values are grouped by the second part, regardless of the date to which the current date/time value belongs.

WeekAge
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of full weeks that have elapsed till the current date.

YearAge
This option is in effect only for fields that store date/time values.
Field values are grouped by the number of full years that have elapsed till the current date.
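Several of the groupings above correspond to date functions you may already know. In MySQL terms, for instance (illustrative only; the wizard computes these for you, and `Orders`/`OrderDate` are assumed names):

```sql
SELECT QUARTER(OrderDate)             AS date_quarter,      -- DateQuarter: 1-4
       DAYNAME(OrderDate)             AS date_day_of_week,  -- DateDayOfWeek
       DAYOFYEAR(OrderDate)           AS date_day_of_year,  -- DateDayOfYear: 1-366
       WEEK(OrderDate)                AS date_week_of_year, -- DateWeekOfYear
       DATEDIFF(CURDATE(), OrderDate) AS day_age            -- DayAge: full days elapsed
FROM Orders;
```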

You then click the Next button.

You then see a screen that lets you know it is ready to produce your report. You can still at this point go back and change any parameter you have chosen. Click Finish to produce the report.

image

You then see a preview of your report. You can see the rows broken down as we have selected. What I want you to pay attention to here is the various ways to export your report. Obviously if you can’t take it with you this data is no good to you.

image

You choose the format you want and a save dialog will appear. It will then ask you if you want to view it. Go ahead and admire your handiwork.

I have saved the example we created here just in case you want to review it.

Future modifications that I would like to make to this report will be to allow you to save a report setting so you don’t have to redo it each time, give you more appearance options, and integrate a chart wizard type interface that will allow you to build charts based on the data you have produced here. I am also considering building a web interface for the Migration and Pivot Wizard functions.

If you find the tool useful and would like to continue to see it developed, please don't be a cheap bastard like me and make a donation here using my email address – kellyjmartens@hotmail.com. Also feel free to send feedback, suggestions, critiques, and praise to the same email address. All are read.

Once again, here is the download link if you are too lazy to scroll back up and find it…. Here is the installation file.

That about covers it. Thank you for reading this and have a great day!


MySQLMove – An Easy To Use Tool To Move MySQL Database Objects and Data

Recently, I was asked to work on a project using a MySQL back end after many years of not having worked with the product. I was amazed to see that some of the more common tools for working with MySQL, such as MySQL Workbench or Toad, had still not developed an easy way to move objects and/or data between MySQL databases such as we have become accustomed to in SQL Server Management Studio. Obviously, this greatly complicated my work and caused me to have to work with data in a production environment more often, which I wasn't comfortable with.

So like most developers, my brain began to think about how to solve this problem. You could of course, go through each object, get the CREATE script, put it in a giant file, and run it that way. That would be time consuming, bulky and difficult to maintain should there be changes to the database objects. And of course, this did nothing for the issue of data migration.

I looked at other products. MySQLDump, a free tool, is fine. Of course, it was more complicated than I would have liked. I just want to move the objects and data, not climb the learning curve of a new product. Just get it done already. Others were available that you had to pay for. Being the cheap bastard that I am, that option didn't appeal to me either.

So I wrote MySQLMove. This started out as a quick and dirty option to get objects and data moved to another database on another server and gradually moved to including things like a report, the option to only migrate objects, and actually put a user interface on top instead of just running a script. The script was fine for me but it may not be for other developers.

Let’s go into some things you need to know or generally be aware of.

First, some MySQL developers might wonder why I did not use INFILE to import data. The reason is that many such developers are coding against a shared hosting environment, and often the administrators in such environments have disabled this option. Even using the LOCAL keyword presented problems relating to security. However, in future releases (but see below) I am planning an interface modification that will allow the end user to indicate that INFILE is available for use, and it will operate accordingly.

UPDATE:

The engine now uses the INFILE and LOCAL keywords. It is also now lightning fast!
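For anyone curious what the INFILE approach looks like under the hood, this is the general shape of the statement; the file path, table name, and delimiter options here are assumptions for illustration, not what the tool literally emits:

```sql
-- LOCAL reads the file from the client machine rather than the server.
LOAD DATA LOCAL INFILE '/tmp/orders.csv'
INTO TABLE orders
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;  -- skip the header row
```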

Please make sure you have created the database on the destination server. I know that comes from the "duh" department, but you would be surprised. If you are asking why I don't do it for you, it is because I drop each object on the destination individually as it is being imported. But more importantly, often each database has a specific username and password designated for access. An idea submitted for future versions (but see below) is to collect all databases under a given username and password and allow multiple databases to be migrated at one time. That day is not here yet, but it intrigues me.

Another "duh": make sure the destination database doesn't have any users writing changes.

Next, it is important that if you are importing from or to a Linux-based environment, you are absolutely certain of the case of your database, its objects, and the server name. Windows is much more forgiving than Linux in this regard. If you are importing from Linux, be prepared for Windows to make all objects lower case, regardless of how you created them on Linux-based servers. I have also seen some versions of MySQL run on Windows that were sticklers for this as well. My best advice is to make all objects lower case. Both Linux- and Windows-based MySQL servers can handle that, and if you are often moving between the two systems you will save yourself a lot of trouble.

Next, you will see an option in a checkbox to only import MySQL objects.

image

If you have the time and the data is available to you in text or CSV format, I suggest you check this. Why? While this engine can and will move data, it takes longer than a manual import would. The way it is built, all of your insert statements for the data are collected as one statement and inserted collectively. It was about 20 percent faster than inserting a row at a time, and if it should bomb, I can roll back the transaction so that you are not left with a table with partial data. That said, if you are one of those folks who build 300+ column tables (by the way, who hurt you as a child to do that to yourself? Just kidding), it is going to take a while. But rest assured, it will get there. This was tested against a table with 80,000 rows and 30 columns. So it is pretty robust.
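The collect-then-insert approach described above can be sketched roughly like this: one multi-row INSERT wrapped in a transaction, so a failure can be rolled back without leaving partial data. Names and values here are made up for illustration:

```sql
START TRANSACTION;

INSERT INTO customers (id, name, city) VALUES
  (1, 'Alfreds Futterkiste', 'Berlin'),
  (2, 'Ana Trujillo',        'Mexico City'),
  (3, 'Antonio Moreno',      'Mexico City');

-- On any error, issue ROLLBACK instead, and the table is left untouched.
COMMIT;
```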

UPDATE:

As mentioned above, I now use the INFILE and LOCAL keywords. The above is no longer a concern.

Next, it is important that you let the code finish. Check that: it is imperative. If you are impatient, switch to decaf and try some breathing exercises.

Finally, MySQLMove will attempt to export database objects in this order: tables, stored procedures, functions, and triggers. Not every one of those objects can be migrated using this tool. One issue that has commonly arisen is when the code uses a PREPARE statement along with a string for use as the command. Future versions (but see below) will probably rectify this. Should something not be exportable to the destination, at the end of the process a report will appear showing which objects could not be moved. In my testing, it is a rare occurrence. Also, with triggers, remember that the definer user is copied straight from the trigger to the destination; you will need to make sure that user exists on the destination database as well. Oh, and by the way, triggers are migrated last. Otherwise each insert might cause a trigger to fire, and that would not be fun.
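On the trigger point: a MySQL trigger carries a DEFINER clause, and that account name travels with the CREATE TRIGGER statement to the destination. A sketch of what such a statement looks like (all names here are hypothetical):

```sql
-- The 'app_user'@'localhost' account must also exist on the destination
-- server, or the migrated trigger will not be usable there.
CREATE DEFINER = 'app_user'@'localhost' TRIGGER trg_orders_ai
AFTER INSERT ON orders
FOR EACH ROW
  UPDATE order_counts SET total = total + 1;
```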

Finally, I would love to continue developing this tool. There are so many things I could and would love to do with it: better reports, table optimization, code optimization, object selection, etc. Unfortunately, that costs me time, which, in translation for those who live under a rock, means it costs me money. If you find the tool useful and would like to see it continue to be developed, please don't be a cheap bastard like me and make a donation here using my email address – kellyjmartens@hotmail.com.

 

paypal-app

I would be extremely grateful. I know the honor system of putting yourself out there isn't terribly effective, but it gives those of you who would like to say thank you, and to keep this going, a chance to do so.

So on to the download…. You have two options….

One: if you already have the MySql.Data DLL installed on your system (make sure it is version 6.9.9.0 and you have at least .NET runtime v4.0.30319), you can download just the bin folder. If you don't, or aren't sure, get the install package zip file called "MySqlMoveSetup". If you need the .NET Framework, you can download it here.

Both the bin folder zip file and the installer zip file are located here on my OneDrive.

Please do send me an email at kellyjmartens@hotmail.com with any bug reports, suggestions, or praise. All are accepted.

 

Have a great day!

, , , ,

3 Comments

Email Excel Spreadsheet as Email Body Issues

Hello all. I had a production manager who wanted an Excel spreadsheet mailed as the body of an email. As some of you know, the code generated by Excel to produce the email is pretty crazy. The result showed up fine in Outlook and on Android, but it did not show the gridlines of the spreadsheet. So this code is based on the excellent work by Ron de Bruin over at http://www.rondebruin.nl/win/s1/outlook/bmail3.htm . I did a replacement on the HTML range in this manner, and the gridlines did appear. And the manager was happy.

Sub Mail_Selection_Range_Outlook_Body()
'For tips see: http://www.rondebruin.nl/win/winmail/Outlook/tips.htm
'Don't forget to copy the function RangetoHTML into the module.
'Working in Excel 2000-2016
    Dim rng As Range
    Dim OutApp As Object
    Dim OutMail As Object

    Set rng = Nothing
    On Error Resume Next
    'Only the visible cells in the selection
    Set rng = Selection.SpecialCells(xlCellTypeVisible)
    'You can also use a fixed range if you want
    'Set rng = Sheets("YourSheet").Range("D4:D12").SpecialCells(xlCellTypeVisible)
    On Error GoTo 0

    If rng Is Nothing Then
        MsgBox "The selection is not a range or the sheet is protected" & _
               vbNewLine & "please correct and try again.", vbOKOnly
        Exit Sub
    End If

    With Application
        .EnableEvents = False
        .ScreenUpdating = False
    End With

    Set OutApp = CreateObject("Outlook.Application")
    Set OutMail = OutApp.CreateItem(0)

    On Error Resume Next
    With OutMail
        .BodyFormat = 2 'olFormatHTML (literal value, since we are late binding)
        .To = "you@you.com"
        .CC = ""
        .BCC = ""
        .Subject = "Testing Purchase Order Email To Steve"
        .HTMLBody = RangetoHTML(rng)
        'Replace returns a new string rather than modifying in place, so the
        'result must be assigned back to HTMLBody for the gridlines to appear.
        .HTMLBody = Replace(.HTMLBody, "border-left:none", "border-left:solid;border-width: 1px;border-color:black")
        .HTMLBody = Replace(.HTMLBody, "border-right:none", "border-right:solid;border-width: 1px;border-color:black")
        .HTMLBody = Replace(.HTMLBody, "border-bottom:none", "border-bottom:solid;border-width: 1px;border-color:black")
        .HTMLBody = Replace(.HTMLBody, "border-top:none", "border-top:solid;border-width: 1px;border-color:black")
        .Send
        'or use .Display
    End With
    On Error GoTo 0

    With Application
        .EnableEvents = True
        .ScreenUpdating = True
    End With

    Set OutMail = Nothing
    Set OutApp = Nothing
End Sub
