Friday, August 5, 2016

Designing partitions to manage subsets of data

SQL Server

https://technet.microsoft.com/es-es/library/ms191174(v=sql.105).aspx

Designing partitions to manage subsets of data


SQL Server 2008 R2

If you partition a table or index, you can move subsets of data quickly and efficiently with the Transact-SQL ALTER TABLE...SWITCH statement in the following ways:
  • You can add a table as a partition to an existing partitioned table.
  • You can switch a partition from one partitioned table to another.
  • You can remove a partition to create a single table.
These scenarios are useful when you regularly add new data to a partitioned table and remove old data from the same partitioned table. The operation can involve large or small amounts of data, depending on the scenario. If the new data you add must be loaded, cleaned, or transformed, it can be handled as an independent entity before being added as a partition. Old data can be archived or stored in a warehouse. Regardless of the size of the data, the transfer is fast and efficient because, unlike an INSERT INTO SELECT FROM statement, the data is not physically moved. Only the metadata describing the storage location changes from one partition to another.
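None of the following appears in the original article; it is a minimal sketch of the three scenarios above, using hypothetical dbo.Staging, dbo.Orders, and dbo.OrdersArchive tables (the source and target of a SWITCH must have identical structure and reside on the same filegroup):

-- 1. Add a standalone table as a partition of an existing partitioned table.
ALTER TABLE dbo.Staging SWITCH TO dbo.Orders PARTITION 3;

-- 2. Switch a partition from one partitioned table to another.
ALTER TABLE dbo.Orders SWITCH PARTITION 3 TO dbo.OrdersArchive PARTITION 3;

-- 3. Switch a partition out to a standalone (empty) table.
ALTER TABLE dbo.Orders SWITCH PARTITION 3 TO dbo.Staging;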

Example case


In the partitioning scenario of the AdventureWorks2008R2 sample database, Adventure Works Cycles archives old data from the TransactionHistory table into a TransactionHistoryArchive table by moving partitions from one table to the other. This is accomplished by partitioning TransactionHistory on the TransactionDate field, with each partition covering a one-month range of values. The TransactionHistory table holds the current year's transactions, and TransactionHistoryArchive holds the older transactions. Partitioning the tables this way allows one month of the year's oldest data to be transferred each month from TransactionHistory to TransactionHistoryArchive.
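As a hedged sketch of that monthly rotation (the partition numbers below are illustrative; the actual sample scripts also manage the partition function boundaries):

-- Move the oldest month from the active table into the matching
-- partition of the archive table (partition numbers are hypothetical).
ALTER TABLE Production.TransactionHistory
    SWITCH PARTITION 1 TO Production.TransactionHistoryArchive PARTITION 2;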

Thursday, June 23, 2016

UPDATE Statement when updating one table with data from another table

SQL Server

http://www.techonthenet.com/sql_server/update.php

SQL Server: UPDATE Statement

This SQL Server tutorial explains how to use the UPDATE statement in SQL Server (Transact-SQL) with syntax and examples.

Description

The SQL Server (Transact-SQL) UPDATE statement is used to update existing records in a table in a SQL Server database. There are 3 syntaxes for the UPDATE statement depending on whether you are performing a traditional update or updating one table with data from another table.

Syntax

The syntax for the UPDATE statement when updating one table in SQL Server (Transact-SQL) is:
UPDATE table
SET column1 = expression1,
    column2 = expression2,
    ...
[WHERE conditions];
OR
The syntax for the UPDATE statement when updating one table with data from another table in SQL Server (Transact-SQL) is:
UPDATE table1
SET column1 = (SELECT expression1
               FROM table2
               WHERE conditions)
[WHERE conditions];
OR
The syntax for the SQL Server UPDATE statement when updating one table with data from another table is:
UPDATE table1
SET table1.column = table2.expression1
FROM table1
INNER JOIN table2
ON (table1.column1 = table2.column1)
[WHERE conditions];

Parameters or Arguments

column1, column2
The columns that you wish to update.
expression1, expression2
The new values to assign to column1 and column2. So column1 would be assigned the value of expression1, column2 would be assigned the value of expression2, and so on.
WHERE conditions
Optional. The conditions that must be met for the update to execute.

Example - Update single column

Let's look at a very simple SQL Server UPDATE query example.
For example:
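The tutorial's original example isn't reproduced in this clipping; a minimal single-column update against a hypothetical employees table might look like:

UPDATE employees
SET last_name = 'Johnson'
WHERE employee_id = 10;

This would set last_name to 'Johnson' for the row whose employee_id is 10; without the WHERE clause, every row in the table would be updated.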

SQL Jobs on a calendar

SQL Server

https://thelonedba.wordpress.com/2016/04/30/sql-jobs-on-a-calendar




SQL Server backup jobs are IO-intensive, he says, stating the obvious.
When you have a lot of servers which have been set up over the years, and you're new to the environment and want to see when storage is being hammered, you might end up thinking that a calendar view could be of help – I know I did.
So all I had to do was figure out a query to look at SQL Server scheduled task history complete with start & end times (which I’d already got), and then get those into a series of appointments in Outlook so I could see when the busy times were.

Step 0: Registered Servers Groups

Step 0?  Well, this isn’t directly related to what I’m doing, but helps a lot.
If you don’t know about Registered Servers, then go and find out about them now.  I’ll wait.  They’re a great way of logically grouping your server estate, and they allow you to run queries against multiple servers at the same time, pulling results back into a single result set.
I’m using them from a Central Management Server; this provides a single shared location that gives the whole team a view of the SQL Server Estate, while removing the overhead of everyone keeping their own lists.
Start working with Registered Servers groups by launching SQL Server Management Studio, and hitting CTRL+ALT+G or selecting “Registered Servers” on the “View” menu.
[Screenshots: launching Registered Servers from the View menu, and the resulting Registered Servers pane.]
(I’ve blocked out the actual server name).
As you can see, we have six groups of servers defined.  What you can’t see is that there are folders within those folders that contain multiple servers, so you can keep more organised lists.  More of how this stuff works another time.  Maybe.
I’m interested in the top tier “gold standard” servers, so I’ll select “1 Gold”, and click to create a new query.  This will take a while – a single query window will open and attempt to connect to all the SQL Servers registered in that group.

Step 1: Query for SQL Job Runtime Information

Of course, this assumes that we’re holding on to SQL Server Agent Job History for a little while – the default job history retention settings might be too aggressive to allow you to get history going back any reasonable length of time.
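The post's own query isn't included in this excerpt; a minimal sketch against the msdb history tables, assuming the helper function msdb.dbo.agent_datetime (undocumented but present since SQL Server 2005), might be:

-- Job outcome rows with computed start and end times.
-- run_duration is an integer encoded as HHMMSS.
SELECT  j.name AS job_name,
        msdb.dbo.agent_datetime(h.run_date, h.run_time) AS start_time,
        DATEADD(SECOND,
                (h.run_duration / 10000) * 3600
                + (h.run_duration / 100 % 100) * 60
                + (h.run_duration % 100),
                msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS end_time
FROM    msdb.dbo.sysjobhistory AS h
        INNER JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE   h.step_id = 0   -- step 0 = overall job outcome
ORDER BY start_time;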

Wednesday, April 6, 2016

Managing the size of the SQL Server SSIS catalog database

SQL Server

https://www.mssqltips.com/sqlservertip/3307/managing-the-size-of-the-sql-server-ssis-catalog-database/

Scenario

I have implemented the SSIS catalog that comes with SQL Server 2012, and I love the reporting and centralization that comes with it; however, I noticed after a few months that the database was growing exponentially. How can I keep the size of this database to a minimum?


Configuration

The SSIS catalog is the central point for working with Integration Services (SSIS) projects that you’ve deployed to the SSIS server. When the SSIS catalog is implemented, the SSISDB database is automatically created. The scope of this tip will focus on the SSISDB growth more than the SSIS catalog itself.
SSIS projects, packages, parameters, environments, and operational history are all stored in the SSISDB database, so if you have hundreds of SSIS packages, or packages that run every few minutes, you can see how the database storing all the historical information would grow exponentially.
Also included when you enable this feature is a SQL Server Agent Job called SSIS Server Maintenance job:
[Screenshot: the SSIS Server Maintenance Job in SQL Server Agent.]
Inside this job are two steps, SSIS Server Operation Records Maintenance and SSIS Server Max Version Per Project Maintenance, that will help clean up the database. By default, this job is set to run at 12:00am nightly which is sufficient:
[Screenshot: the two job steps and the nightly 12:00am schedule.]
Looking at the first step, SSIS Server Operations Records Maintenance, you will notice that it executes a stored procedure named internal.cleanup_server_retention_window. This sounds like it could be the stored procedure that cleans up history:
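The step's command itself isn't shown in this excerpt, but the retention window that the cleanup procedure honors can be inspected and tightened through the documented catalog interfaces; a sketch:

USE SSISDB;

-- Inspect the current retention settings.
SELECT property_name, property_value
FROM   catalog.catalog_properties
WHERE  property_name IN (N'RETENTION_WINDOW', N'OPERATION_CLEANUP_ENABLED');

-- Reduce retention from the default 365 days to 60 so the nightly
-- maintenance job removes more history.
EXEC catalog.configure_catalog
     @property_name  = N'RETENTION_WINDOW',
     @property_value = 60;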

Integration Services Logging Levels in SQL Server 2016

SQL Server

https://www.mssqltips.com/sqlservertip/4034/integration-services-logging-levels-in-sql-server-2016/

Scenario

At the time of writing, SQL Server 2016 preview (CTP 2.3) has been released and there are some changes for the Integration Services (SSIS) Catalog logging levels. This tip will explain what is new and how you can use it.


Configuration

Integration Services Logging Levels

In SQL Server 2012, the SSIS catalog was introduced. This catalog provides a central administration platform for the SSIS environment. One of its features is the logging levels, where each logging level specifies which events are logged to the SSIS catalog. Do not confuse this with the log providers of an SSIS package. The log providers log events to a specific destination: SQL Server, a flat file, an XML file and so on. A logging level is a configuration option for the built-in logging of the SSIS catalog: every time you run an SSIS package in the catalog, events are logged to tables in the SSISDB database.
In its initial release and in SQL Server 2014, Integration Services had 4 logging levels:
  • None - logging is turned off.
  • Basic - all events are logged, except custom and diagnostic events. The name is quite misleading, as a lot of information is logged. This is the default logging level of the SSIS catalog.
  • Performance - only performance statistics and the OnError and OnWarning events are logged. This logging level might log less information than the basic logging level, but it depends on the number of data flows and their complexity. Data flow performance information of this logging level is shown in the catalog report Execution Performance.
  • Verbose - all events are logged. Only use this logging level for advanced troubleshooting or fine tuning.
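As a hedged illustration of where these levels come into play: the logging level can be set per execution when a package is run from T-SQL (the folder, project, and package names below are hypothetical):

DECLARE @execution_id BIGINT;

-- Create an execution for a hypothetical deployed package.
EXEC SSISDB.catalog.create_execution
     @folder_name  = N'MyFolder',
     @project_name = N'MyProject',
     @package_name = N'MyPackage.dtsx',
     @execution_id = @execution_id OUTPUT;

-- 0 = None, 1 = Basic, 2 = Performance, 3 = Verbose.
EXEC SSISDB.catalog.set_execution_parameter_value
     @execution_id,
     @object_type     = 50,           -- execution-level parameter
     @parameter_name  = N'LOGGING_LEVEL',
     @parameter_value = 1;            -- Basic

EXEC SSISDB.catalog.start_execution @execution_id;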
In the SQL Server 2016 preview, a new logging level has been added and there is now the possibility to create your own custom logging levels. Both will be explained in the following sections.

SQL Server 2016 preview

As mentioned earlier, at the time of writing SQL Server 2016 is still in preview (CTP 2.3 has been released). This means that the features of Integration Services and its catalog can still change: functionality might be modified, disappear, or be added.

The RuntimeLineage Logging Level

Currently, not a lot of information is out there on this logging level. The documentation states: Collects the data required to track lineage in the data flow. My guess is this logging level might be used in other features/products to track the lineage of the data; i.e. where does the data come from and where does it go to? In order to test this logging level, I created a very basic package:
[Screenshot: a simple data flow.]
It reads data from a table, adds a custom column, merges streams and writes the data to a table. When the package is executed, we can see in the catalog view catalog.event_message_context that extra information about the data flow has been logged:
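The logged rows themselves aren't reproduced here, but they can be pulled back with something like the following sketch (the execution ID is a placeholder):

USE SSISDB;

DECLARE @execution_id BIGINT = 12345;   -- placeholder: your execution ID

SELECT emc.package_path,
       emc.context_source_name,
       emc.property_name,
       emc.property_value
FROM   catalog.event_message_context AS emc
       INNER JOIN catalog.event_messages AS em
           ON em.event_message_id = emc.event_message_id
WHERE  em.operation_id = @execution_id;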