Productivity Tip: ClipX Clipboard History Manager

April 18, 2011

This post was originally written for my personal blog, but I decided to re-post it here since this blog has a different audience and the tip is probably more useful to the more technical readers I hope to have here.  Enjoy!

The other day at work I realized how much time this tiny little application called ClipX has saved me, so I figured I would share it with those who read my blog. It runs all the time in your system tray and remembers what you copied to your clipboard, up to a configurable number of items (I have mine set to 250). Unlike Windows, which only holds the last thing you copied in memory, ClipX holds all of them, so you can easily go back and paste whatever you copied 2 “copies” ago. Once you have it installed, just press CTRL-SHIFT-V to pull up a special paste menu like the one shown here, listing all of the previous data you copied to your clipboard.

[Image: the ClipX paste menu]

You can then either click on the item you want to paste, or just arrow down to it and hit Enter. Sometimes nothing shows up right where I want to paste it, but if I then press the normal paste shortcut (CTRL-V), the item I selected appears.

Working with data all day, this definitely saves me a ton of time, since I don’t have to re-copy something I copied a while back; I can just go back and re-paste it, even though I’ve copied other things since then. I must use ClipX 25-50 times a day. I can’t believe something like this hasn’t been included in the Windows OS by now. Hopefully you find ClipX as helpful as I do!


SQLSaturday 71 – Boston

April 5, 2011

In the last year or so I’ve heard a lot about the great SQLSaturday events from the twitterati, bloggers, and co-workers, and I have been looking for my chance to attend one.  Turns out I picked a pretty good one to attend April 2nd in Boston!  Some of the big-name speakers included Andy Leonard, Grant Fritchey, Adam Machanic, and Paul White, among others.  The event was held at the top-notch Babson Executive Center, with great conference rooms and an awesome lunch (more like dinner!).  In addition to the great speakers there were numerous vendors to talk to, each with their own drawing box into which each attendee could deposit one or all of their 16 raffle entries.  These great vendors and sponsors gave away a couple of iPad 2s, a couple of Xbox 360s with Kinect, and other sought-after prizes.

Even though there were 30 sessions lined up for the day, one could only reasonably sit in 5 of them.  There were many others I would have enjoyed listening to, but I read the abstracts beforehand and settled on the following five:

SSIS – The New Stuff by Matt Masson (blog | twitter)

This session was advertised as being given by Andy Leonard, but in reality Matt did all the talking, with Andy chiming in at times.  This turned out to be a good thing, since Matt works on the SSIS development team and provided some great insight into the team and its priorities for the upcoming SQL Server Denali version of SSIS.  Some of the things I found most interesting from Matt’s talk include:

  • Improved undo/redo support!
  • SSIS in Denali will make it easier to do diffs between versions
  • They’ve improved the deployment architecture to make it easier to deploy as well as easier for DBAs to manage SSIS packages in SSMS
  • Improvements made that allow flexible order of authoring, meaning you can start with the destination and work backwards, rather than the typical method of starting with the source and designing your flows toward the destination.  I don’t see this as being terribly useful to me in most of my packages, but it is nice to know that you can work in a more disconnected manner if you need to.
  • Packages managed by the SSIS server will allow DBAs to turn automatic logging on and off as needed without having to modify the package(s).

Database Design for mere developers by Steve Simon (blog | twitter)

I walked into this session expecting to pick up some general design tips for developers who might have to design their databases from the ground up, including schema design and so on.  What we got was a highly specific talk about using user-defined functions instead of querying directly from views.  Maybe I didn’t fully understand the abstract before I selected this session, but I tried to stay interested even though the topic covered was not what I expected.  I did pick up some tips about handling recursion in SQL, and thanks to this session I plan to read more of what Jacob Sebastian has written on the topic.
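
For anyone who hasn’t seen recursion handled in T-SQL, the usual tool is a recursive common table expression.  This is just a minimal sketch of my own (the EMPLOYEE table and its columns are invented for illustration), not anything from the talk:

-- Assumed table: each employee points at a manager (NULL for the top).
WITH org AS
(
    -- Anchor member: start at the top of the hierarchy
    SELECT employee_id, manager_id, employee_name, 0 AS depth
    FROM EMPLOYEE
    WHERE manager_id IS NULL

    UNION ALL

    -- Recursive member: join each level back to the CTE itself
    SELECT e.employee_id, e.manager_id, e.employee_name, org.depth + 1
    FROM EMPLOYEE e
    INNER JOIN org ON e.manager_id = org.employee_id
)
SELECT employee_id, employee_name, depth
FROM org
OPTION (MAXRECURSION 100); -- optional guard; 100 is also the default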

SQL Query Optimization – It’s Not Rocket Science! by Paul White (blog | twitter)

I made sure I arrived early for this session to secure a seat, as I’ve heard lots of good things about Paul White’s content and presentation style.  I was not let down.  Paul is an engaging speaker who really knows his stuff and how to interact with the audience.  I learned a lot about how the SQL Query Optimizer works, in broad terms that were easy to understand.  About halfway through the talk I was starting to wonder if I would get any tips I could use on a day-to-day basis to make the Query Optimizer happier.  Thankfully, Paul came through in the final 5 minutes with some “take homes” that can help us create databases and queries that allow the optimizer to come up with good plans for returning the data we want, including:

  • Use good relational design (set-based methodologies)
  • Use constraints, which help the optimizer know what data is allowed in a given column (I need to read more about exactly how this helps, but see the sketch after this list for one effect)
  • Watch the stats.  Out-of-date stats can hurt queries when the data has changed a lot since the stats were generated.  The optimizer may pick one plan based on the stats when the reality is it should be doing something else to return your data more efficiently
  • Use computed columns when needed.  I didn’t quite catch the exact reason behind this but will learn more about why this is helpful
  • Use new features with care, as they haven’t been as honed in relation to the Query Optimizer over the years as other tried and true features have (one example from Paul: Filtered Indexes)
  • Different syntax can improve your query in some cases.
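
The constraint bullet was the one I most wanted to see for myself.  Here is a small sketch of my own (a throwaway table, not one of Paul’s demos) showing one effect I’ve read about: with a trusted CHECK constraint in place, the optimizer can prove that certain predicates can never match and skip reading the table entirely:

CREATE TABLE CUSTOMER_DEMO
(
    customer_id INT NOT NULL,
    status CHAR(1) NOT NULL,
    CONSTRAINT CK_CUSTOMER_DEMO_status CHECK (status IN ('A', 'I'))
);

-- The constraint tells the optimizer that 'X' is impossible in this
-- column, so the plan is a constant scan with no table access at all.
SELECT customer_id
FROM CUSTOMER_DEMO
WHERE status = 'X';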

SSIS Performance Design Patterns by Matt Masson (blog | twitter)

Another enjoyable SSIS session by Matt Masson, this one touching on a number of design patterns.  What I enjoyed most about this talk was hearing about some of the clients Matt has worked with and some of their misunderstandings of how SSIS works.  One particular client didn’t realize that SSIS moves all of its data through memory on the server running the package.  He said the client was used to a different ETL tool that moved data differently, and as a result they had designed their SSIS system the same way, moving data to a physical server halfway around the world and then back to the destination server sitting in the same data center as the source!  Another anecdote he highlighted was a client using a large number of SSIS servers in concert to manage a huge ETL import process that needed to scale up quickly to handle mobile phone data logs.  This really put the puny SSIS feeds I’ve been working on lately into perspective 🙂

Some tips I picked up from this session include:

  • Matt encourages the Source->Staging->Destination model for many feeds, where you save the data to a staging table at the destination and then merge in the changes using T-SQL, because of the performance benefits of SQL Server over SSIS (a sketch of this pattern follows this list)
  • Make sure SSIS is the right tool.  He’s seen clients use SSIS to move data to and from the same database! An INSERT INTO statement would have worked just fine.
  • Use the database engine when you can.  It performs better than SSIS and you can utilize indexes to shorten your load times
  • Use a scientific approach to tuning SSIS packages (Measure, Hypothesize, Modify, Re-Measure)
  • Create Benchmarks to track performance of your package (Calculate pre-execute and post-execute steps that measure key performance indicators)
  • Use the SQL engine for sorting, not SSIS, when possible.  If you have sorted data already, tell SSIS so using the IsSorted property
  • SSIS is designed for parallel processing, so utilize that!
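
The staging bullet is the pattern I expect to use first.  Matt didn’t show this in code, so the following is just my own sketch of the idea under invented names (stg_CUSTOMER is the table the SSIS data flow loads, dim_CUSTOMER is the real destination; MERGE requires SQL Server 2008 or later):

-- Runs as a T-SQL step after the data flow has loaded stg_CUSTOMER.
MERGE dim_CUSTOMER AS tgt
USING stg_CUSTOMER AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED AND tgt.status <> src.status THEN
    UPDATE SET tgt.status = src.status
WHEN NOT MATCHED BY TARGET THEN
    INSERT (customer_id, status)
    VALUES (src.customer_id, src.status);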

15 Powerful SQL Server Dynamic Management Objects by Adam Machanic (blog | twitter)

This presentation was in the last time block of the day, which made it a bit more difficult to follow along with all the great demos Adam put on for us.  I took a few notes but mainly focused on absorbing all the useful information we can glean from the DMOs.  This is one area I think will become more and more useful to me as I acquire more production database support responsibilities over my career.  It was nice to meet one of the guys behind SQLblog.com and the author of the famous Who Is Active stored procedure. I will definitely look into more of Adam’s blog posts and scripts to learn more about using the Dynamic Management Objects and put some of them to use.
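
To give a taste of what the DMOs expose, here is one of the simpler patterns (a common example, not one of Adam’s demos): joining sys.dm_exec_requests to sys.dm_exec_sql_text shows the statement each active request is running:

-- What is running right now? (excluding this connection)
SELECT r.session_id,
       r.status,
       r.wait_type,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;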

Vendor Session: Axis Technology: Data Masking

Axis Technology is the maker of DMsuite, a web-based data masking software product.  I was interested in this session because I know the challenges of creating fake personally identifiable information in a test environment.  DMsuite appears to do a great job of cleansing personal data and replacing it with fake but meaningful replacement data.  I can definitely see myself having a use for this type of software at some point in my career, and it was good to see some of the things they can do to cleanse the data yet keep it usable for testing and QA.

Overall, SQLSaturday #71 Boston was a great success and I am looking forward to the next one I’m able to attend.  The only criticism I have is that for 4 of the 5 sessions I attended, the speakers seemed to be presenting sessions designed for longer time slots and not honed to the allotted 60 minutes.  I attribute this to the wealth of knowledge these subject matter experts have and the re-use of presentations from other talks.  It would be nice to see them prepare ahead of time for a shorter presentation and not have to rush at the end to cover the last few slides.  All in all I can’t complain too much, since the event was free for attendees!

Write better DML queries using SELECT

April 5, 2010

I’m not sure when I first noticed this trick, but I think it may have come to me after a session of working with Access.  In the query designer in Access you can click a button that transforms your query from Insert to Update and back to Select.  As a rookie SQL Server user I’m sure I made a few errant deletes, updates, and inserts.  Soon after, I learned to start ALL of my DML (Data Manipulation Language) queries, which include Inserts, Updates, and Deletes, with a SELECT statement.  This lets you see EXACTLY what data you are going to delete, insert, or update before you actually do it.  I have caught many mistakes while crafting the SELECT ahead of the DML statement that would have caused some headaches later if I had just jumped right into the DML form of the query.

Obviously this is most beneficial with complex queries with many joins and conditions, but I will use a few simple examples to illustrate the point.  Let’s assume we have a table called CUSTOMER that contains customers and has a column customer_id to identify each customer and a column called status, which can be set to either 'A' (active) or 'I' (inactive).
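
If you want to follow along, a throwaway version of the table is easy to spin up (this exact definition is just my assumption for the examples):

CREATE TABLE CUSTOMER
(
    customer_id INT NOT NULL PRIMARY KEY,
    status CHAR(1) NOT NULL -- 'A' = active, 'I' = inactive
);

-- A few sample rows (row constructors need SQL Server 2008+)
INSERT INTO CUSTOMER (customer_id, status)
VALUES (1, 'A'), (2, 'I'), (3, 'I');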

Say we want to delete the inactive customers.  First, do a SELECT:

SELECT customer_id, status
FROM CUSTOMER c
WHERE c.status = 'I'

We can then evaluate these results and make sure that they contain only the customers we want to delete.

We can then easily turn this into a delete statement by copying the FROM and WHERE clauses and adding DELETE:

DELETE c
FROM CUSTOMER c
WHERE c.status = 'I'

Now, consider if instead of deleting these customers, we wanted to activate them.  Once again we can reuse the FROM and WHERE clauses from our SELECT, which we know identify exactly the set of data we want to change:

UPDATE c
SET status = ‘A’
FROM CUSTOMER c
WHERE c.status = 'I'

Finally, we can take the same SELECT, FROM, and WHERE clauses and use them in an INSERT statement.  Say we want to insert the inactive customers into an archive table, to clean up the active customer table in our OLTP system:

INSERT INTO archive_CUSTOMER
(customer_id, status)
SELECT customer_id, status
FROM CUSTOMER c
WHERE c.status = 'I'

Even though turning a SELECT statement into an INSERT statement requires the least modification of the three, it is still useful to run your SELECT first, to ensure it returns the data you expect before you actually insert it into the new table.

Most times when creating my UPDATE or DELETE statements in a script, I leave the SELECT portion commented out, so that I can go back and troubleshoot the query if needed, even after I have changed it into a DML query, by simply highlighting everything after SELECT and running it:

UPDATE c
SET status = 'A'
--SELECT customer_id, status
FROM CUSTOMER c
WHERE c.status = 'I'

This looks like a lot of extra work for these simple queries, but if you get in the habit of doing it for all queries, you will see the benefits when you are working on some more complex DML statements.
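
To show what that looks like in practice, here is a slightly more complex sketch (the ORDERS table and the 90-day rule are invented for illustration): an UPDATE driven by a join, with the commented-out SELECT left embedded so the query can be flipped back to read-only at any time:

-- Deactivate customers with no orders in the last 90 days.
UPDATE c
SET c.status = 'I'
--SELECT c.customer_id, c.status, o.last_order_date
FROM CUSTOMER c
LEFT JOIN (
    SELECT customer_id, MAX(order_date) AS last_order_date
    FROM ORDERS
    GROUP BY customer_id
) o ON o.customer_id = c.customer_id
WHERE c.status = 'A'
  AND (o.last_order_date IS NULL
       OR o.last_order_date < DATEADD(DAY, -90, GETDATE()));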
