SQL Saturday #192


John Welch presenting at SQL Saturday #192

Today I had the great opportunity to attend SQL Saturday #192 in Tampa, FL. This is probably one of the best lineups of speakers I have seen for a SQL Saturday event. With great representation from across the PASS community, attendees could catch sessions by people like Buck Woody (@BuckWoody), John Welch (@John_Welch), Jose Chinchilla (@SQLJoe), Jonathan Kehayias (@SQLPoolboy), and many other MVPs from the SQL Server community. If you were in the Tampa area and you didn't make it, I'm sorry, but you should have. If you were here, then you know exactly what I'm talking about.

I personally had the chance to attend a few sessions by some of these people and want to take the opportunity to highlight a few of them.

I started my morning off with Jose Chinchilla presenting an introduction to Self-Service BI using PowerPivot and Power View (mind the gap). It was a great session showing how Microsoft is putting BI into users' hands in an effective way. If your eyes crossed when I said Self-Service BI, then you need to get on the bus before you get left behind.

Like everyone else in the IT world, I know Big Data is the next big thing and is moving forward quickly. With that in mind, I wanted to make sure I hit at least one Big Data session today. With my very basic understanding of Hadoop, I chose John Welch's session, "Moving Big Data – SSIS, Pig, and Sqoop". It gave a great high-level understanding of the different layers a basic Hadoop setup would entail, and he went over some of the "animals" of Big Data, along with their roles.

I also took a step outside of the BI world to attend Jonathan Kehayias's session on "Performance Tuning by Digging into the Plan Cache". I have to say, if you have not attended one of his sessions, you're missing out on an opportunity to learn from one of the best resources in the community. Jonathan's session on the Plan Cache has pointed me in the right direction to find the source of, and hopefully fix, some of the issues I am fighting in my current environment.

Again, these weren't all of the best sessions at this event, just the few that I personally attended and wanted to highlight. If you have an opportunity to attend an event like this and you don't, then I feel bad for you. You are missing out on a chance to learn from some of the best in our industry, while also getting to network with them and the others who attend. If you go and you feel too shy to walk around and introduce yourself to people, start with me. You've read my blog, so I feel like we know each other. Start here and we'll work our way around the room.

Deploying Database changes to multiple databases Part 2: Powershell


Here we are again. It has been a little longer since Part 1 than I wanted, but I'll do my best to make it worthwhile. When we spoke last, I said there are certain times when a T-SQL cursor won't work because of the type of command you are looking to run, such as when you are trying to create a stored procedure in multiple databases. For cases like this, I recommend either using PowerShell or the option I will discuss in Part 3, SSIS.

With PowerShell, I have created a script called Query-Multiple and added a lot of comments to help you understand the thought process behind why it is written the way it is and where you can add or remove your own logic. I've kept it very simple so that it can help any DBA who is a novice with PowerShell begin using it to apply changes to multiple databases. When you run the script, you pass it the path to the script file that you want to run.
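To give you the shape of the approach, here is a minimal sketch. This is a simplified illustration, not the full Query-Multiple script; it assumes the SqlServer module's Invoke-Sqlcmd cmdlet and the Dealership/Clients lookup from Part 1:

# Minimal sketch: run one .sql file against every client database.
param(
    [Parameter(Mandatory = $true)]
    [string]$ScriptPath   # path to the .sql file you want to deploy
)

$server = 'localhost'     # hypothetical instance name

# Pull the list of target databases from the Clients table
$databases = Invoke-Sqlcmd -ServerInstance $server -Database 'Dealership' `
    -Query 'SELECT DBName FROM Clients'

foreach ($db in $databases) {
    Write-Host "Deploying $ScriptPath to $($db.DBName)"
    # Each call opens its own connection in the target database, so scripts
    # that must start with CREATE PROCEDURE work without a USE statement.
    Invoke-Sqlcmd -ServerInstance $server -Database $db.DBName -InputFile $ScriptPath
}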

Please take a look at the script and let me know any questions or recommendations that you have. I’m very interested to hear what everyone thinks.

Deploying database changes to multiple databases Part 1: Cursors


As a DBA, no matter where you work, there will come a time that you will need to deploy changes to multiple databases at a time. If you are lucky, it will be the same change, multiple times.

For me, I work for a software company where we host our clients' data in separate databases, so when there is a code change that affects the database, it needs to be propagated to multiple databases. Some of you reading this may have the time and patience to go through and run these scripts against each and every database. For the rest of us, I have a couple of simple ways to push out changes that will hopefully be as helpful to you as they were to me.

One way is with the old reliable cursor. If you’re not familiar with how to write a cursor for iterating through multiple databases, or at all, just follow along.

In this example, we host databases for car dealerships, and we want to add a column called Color to the Cars table in every one of our client databases. We have a table, Clients, in our Dealership database that lists all of our clients and the names of their databases. The first thing we want to do is write our query for adding the column to the table. To keep it simple, we will make Color a free-text NVARCHAR column, and because existing rows need a value for a new NOT NULL column, we'll also give it a default.

So we come up with this script:

ALTER TABLE Cars ADD [Color] NVARCHAR(50) NOT NULL
    CONSTRAINT DF_Cars_Color DEFAULT ('Unknown') -- existing rows need a value

Now that we know what we want to do to each database, we need to come up with a query to pull that list of databases that we need to update. For our example, we will use the following SELECT statement:

SELECT DBName FROM Clients

Since we will have a "USE" statement in our script to switch from database to database, we will build a string variable containing our script, starting with the USE statement, and execute that. At the beginning of the script, we declare the variables for the database name and the SQL statement, and then the cursor itself. Here is the cursor for our example:

DECLARE @DBName VARCHAR(50)
DECLARE @SQL VARCHAR(500)

DECLARE dbnames CURSOR FOR
SELECT DBName FROM Clients

OPEN dbnames
FETCH NEXT FROM dbnames INTO @DBName

WHILE @@FETCH_STATUS = 0

BEGIN

    -- Build the change script with a USE statement so it runs
    -- in the current client database
    SET @SQL = 'USE ' + QUOTENAME(@DBName) + '
    ALTER TABLE Cars ADD [Color] NVARCHAR(50) NOT NULL
        CONSTRAINT DF_Cars_Color DEFAULT (''Unknown'')'

    PRINT(@SQL) -- keep a record of what was executed
    EXEC(@SQL)

    FETCH NEXT FROM dbnames INTO @DBName

END

CLOSE dbnames
DEALLOCATE dbnames

This has been an easy tool for me to deploy database changes to multiple client databases. The nice thing is that you can keep reusing the cursor just by changing the alteration script inside it to do whatever task you want.

This handy tool has saved me a lot of time in my current position, but the more complicated a change you are looking to make, the more temperamental this style can be. For instance, if you want to create a function, you cannot do it in this cursor because a "CREATE FUNCTION" script cannot start with a "USE" command; CREATE FUNCTION has to be the first statement in its batch. Scripts that create a table and load data can also get fussy about running in a cursor like our example. To make these types of changes, we need to explore other options for running these scripts, like SSIS and PowerShell.
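That said, one workaround worth knowing is to call the target database's own copy of sp_executesql, which runs the batch in that database's context so CREATE FUNCTION can be the first statement. Here is a minimal sketch; the Dealer001 database and GetCarCount function are hypothetical:

DECLARE @DBName SYSNAME = N'Dealer001' -- hypothetical client database
DECLARE @SQL NVARCHAR(MAX) = N'CREATE FUNCTION dbo.GetCarCount()
RETURNS INT
AS
BEGIN
    RETURN (SELECT COUNT(*) FROM Cars)
END'

-- Executing the target database's copy of sp_executesql runs the batch
-- in that database, so no USE statement is needed
DECLARE @Proc NVARCHAR(400) = QUOTENAME(@DBName) + N'.sys.sp_executesql'
EXEC @Proc @SQL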

Tomorrow, I will bring you Part 2, where we will go over how to make the same change with PowerShell. In Part 3, we will do it in SSIS using variables in conjunction with connection string expressions.

SQL Server 2012 #SpecialOpsTour – Jacksonville Blog Series


Today, Microsoft's #SpecialOpsTour rolled into Jacksonville. Just in case you are not familiar with the event, allow me to explain. The #SpecialOpsTour is a traveling seminar that goes over the new features of SQL Server 2012, and this blog series will cover the subjects presented during the event. I thought this would be a great thing to do for those who won't be able to attend one of the twelve stops on the tour. I personally know how it feels to be "out of the loop" because of geographic challenges, so I will do all that I can to bring this and other events to those who are unable to attend. That's what the #SQLFamily is about, right? Paying it forward.

This series of blog posts, which I will publish over the next few weeks, will cover the sessions offered at the event. Where I am able, I will throw in my own views on the features discussed, any experience I have had with them, and some pointers I have come across while writing the articles.

So here's the schedule of sessions/articles. The titles and descriptions come straight from the agenda page on the Special Ops Tour site:

SQL Server 2012 – The New World of Data 

  • Learn about SQL Server 2012 as the mission-critical, cloud-enabled data platform, ready for the world of Big Data. We'll introduce SQL Server 2012's key pillars: Mission Critical Confidence, Breakthrough Insights, and Cloud on Your Terms. You will learn how SQL Server will enable mission-critical confidence at a low TCO, and be introduced to Fast Time to Solution scenarios with SQL Server appliances, developer tools, and blazing-fast performance.

Introduction to SQL Server 2012 AlwaysOn – Availability Groups

  • In this session, you'll learn how to configure and implement SQL Server AlwaysOn Availability Groups. We'll explore the architecture, concepts, and steps for configuring high availability in SQL Server 2012.

Self-Service BI using PowerPivot and Power View

  • Discover the new capabilities of PowerPivot and Power View in SQL Server 2012, allowing people at all levels of the organization to unlock new insights with pervasive data discovery across structured, unstructured, and cloud data resources, backed by self-service managed BI. We'll dive deep into the technical underpinnings and the integration with SharePoint and Excel for rapid data exploration and visualization.

Understanding Microsoft Training & Certifications for Microsoft SQL Server 2012

  • Are you interested in becoming certified in SQL Server? Do you already hold a SQL certification and wonder about getting certified in the new program? In this session, Pete Harris will provide an overview of the certifications and training offered by Microsoft for SQL Server 2012. We'll discuss the value of being certified, then walk through the different certifications, upgrade paths, and the related training options offered by Microsoft Learning.

There's a lot of information to cover, and I can't wait to share it with those who aren't fortunate enough to have an opportunity to go.

See you next week.

New Year, New Opportunities, New Challenges


With the new year has come new opportunity for me and my family. I am excited to announce that I have accepted a position in Jacksonville, FL as a Database Administrator. This is a great opportunity for me professionally, not only to grow my skill set, but also to experience another corner of the IT industry by working for a software company. It's also a chance for my family to move back to Florida, a part of the country that we really consider home.

I am very excited about this opportunity and am looking forward to getting back to work in the SQL Server world. Along with this new position will come the chance to expand the subjects I blog on, including transactional replication, DMVs, and maintenance plans, just to name a few. Stay tuned. Things are about to get really exciting.

What are CTEs?


I was chatting with someone last week who wanted to know how much or how little I knew about T-SQL. They asked me what a CTE was. For a lot of people who have been around T-SQL for a while, this might seem like a pretty elementary question, but I had never been exposed to them, probably because the environments I had worked in before were small enough that performance wasn't as big of a consideration (jealous yet?). I'll explain later in this article what a CTE has to do with performance. But first, let's start with the basics: what is a CTE?

Official Definition

CTE stands for Common Table Expression. A CTE is a named, temporary result set defined by a SELECT query, and it exists only for the duration of the statement in which it is defined. CTEs can be used in SELECT, INSERT, UPDATE, DELETE, and CREATE VIEW statements.

What makes a CTE different from a derived table or sub-query?

CTEs have several advantages over sub-queries and derived tables. One distinction is that a CTE can be referenced multiple times within the same query, including being joined to itself, which a derived table cannot do without repeating its definition. That also makes a query easier to write and maintain: the expression is defined once and reused, rather than re-written as a sub-query everywhere the same result set is needed. (Note that SQL Server expands a CTE into the query plan much like a view rather than storing its results, so the benefit is mainly in readability and reuse rather than guaranteed performance.)
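As a quick illustration of the multiple-reference point, the same CTE can appear on both sides of a join. This is just a sketch; the Employees table and its columns are hypothetical:

WITH EmployeeCTE (EmployeeID, ManagerID, EmployeeName) AS
(
    SELECT EmployeeID, ManagerID, EmployeeName FROM Employees
)
SELECT e.EmployeeName, m.EmployeeName AS ManagerName
FROM EmployeeCTE AS e
LEFT JOIN EmployeeCTE AS m ON e.ManagerID = m.EmployeeID -- CTE joined to itself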

Syntax for how to use CTEs

You would create a CTE as follows:

WITH CTE_NAME (column1, column2) AS
(
    SELECT column1, column2 FROM table1
)

After the CTE is declared with the WITH statement, it can be referenced the same as any table or view within a query. The only requirement is that the statement referencing the CTE must be part of the same executed SQL statement that declared it because, like a derived table, once the statement finishes executing, the CTE result set disappears. For instance, see this example below:

WITH CTE_NAME (column1, column2) AS
(
    SELECT column1, column2 FROM table1
)
SELECT column1, column2
FROM CTE_NAME
WHERE column2 > 100000
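And since CTEs also work in data modification statements, here is one more sketch showing an UPDATE through a CTE (the Cars table and Price column are hypothetical; the change is applied to the underlying table):

WITH ExpensiveCars (CarID, Price) AS
(
    SELECT CarID, Price FROM Cars WHERE Price > 100000
)
UPDATE ExpensiveCars
SET Price = Price * 0.9 -- modifies the underlying Cars table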

I would like to thank Kirk Kuykendall for the article he wrote, Common Table Expressions. It helped me get the answers I needed and gave me the background to write this article for my readers.

Please go ahead and like or comment on this article and let me know how I'm doing. Also, if you have a subject you would like me to blog on or questions about this article, you can email me at joshua.luedeman@gmail.com.

SQL University


I can’t speak for anyone else, but I am at a point in my career development where I have narrowed myself down to either becoming a DBA or a Business Intelligence Developer.

As I was doing more research into the DBA career path, I found a multitude of great resources, but one in particular caught my eye: SQL University. SQL University is the brainchild of Jorge Segarra (@SQLChicken). It is a collaboration of SQL professionals from all over who have volunteered their time to post lessons on different aspects of SQL Server.

I have personally found this to be an awesome resource that has helped me gain a greater understanding of all things MS SQL. I’m currently plugging through the syllabus about a semester behind just from starting late.

If you have the opportunity to check this out, I highly recommend it. If you like what you see, please leave a comment for Jorge or the presenters while you are there and let them know how much you appreciate their hard work.