About Me

I now work for Microsoft Federal in Chevy Chase, MD.

Dedicated to providing customer-driven, results-focused solutions to the complex business problems of today... and tomorrow.

At SQLTrainer.com, LLC, we understand that the technical challenges businesses face today are greater in both scope and complexity than ever before. Businesses must understand both local IT infrastructures and cloud-based technologies.

What is SQLTrainer.com?

Founded in 1998 by Ted Malone, SQLTrainer.com is a technical consulting, training and content development firm dedicated to the following core principles:

  • Technology Alone is NOT the Answer! - Implementing a particular technology because it is interesting or "cool" will not solve customer problems.
  • Technology Solutions do NOT need to be Overly Complex! - Developers and technical practitioners often attempt to build the cleverest solution possible. While this may stroke the egos of those involved, it rarely produces a maintainable solution.
  • Consultants Should be Mentors First! - When looking to hire an external consultant, businesses should look to the consultant who's willing to train themselves out of a paycheck.

Why the name, SQLTrainer.com?

SQL (pronounced See-Quell) stands for Structured Query Language, which is at the heart of every modern relational database system. Since many technology solutions today rely on some form of database storage or interaction, it was only logical to incorporate SQL into the name of the organization. Given that one of our core principles is to be a mentor/trainer above everything else, the name SQLTrainer made sense. Since we also wanted to represent our embrace of the cloud, it seemed logical to add the ".com", referring to the biggest "cloud" of them all.

Live Feeds

RSS Feeds
Tuesday, May 19, 2015 11:00:00 AM

Brian Mitchell, Microsoft Senior Business Program Manager, took the stage at Ignite two weeks ago to discuss Planning Your Big Data Architecture on Azure. “Big Data” requires big scale. Mitchell covered how this is accomplished through storage strategies, partitioning, fault tolerance, and the right architecture to scale to the extremes of big data. If you want to explore cloud-based big data apps, be sure to check it out.

Ignite covered the cloud in depth. Make sure to take a look at all the other on-demand presentations, demos, and more.

Friday, May 15, 2015 2:30:56 PM

Today, we are pleased to announce the availability of SQL Server 2014 Service Pack 1 (SP1). The Service Pack is available for download on the Microsoft Download Center. This upgrade is available to all customers with existing SQL Server 2014 deployments.

For more highlights of the release, please read the Knowledge Base Article for Microsoft SQL Server 2014 SP1. To obtain SQL Server 2014 SP1 with its improved supportability, please visit the links below.

Tuesday, May 5, 2015 8:51:00 AM

Guest post by Tiffany Wissner, Senior Director, Data Platform

Yesterday at Microsoft's Ignite conference, we demoed the first sneak peek of Azure SQL Data Warehouse. As you build more applications in the cloud and cloud-born data increases, there is strong customer demand for a data warehouse solution in the cloud that can manage large volumes of structured data and process that data with relational processing for fast analytics. Customers also want to take advantage of the cost-efficiencies, elasticity and hyper-scale of the cloud for their large data warehouses. They need that data warehouse to work with their existing data tools, utilize their existing skills and integrate with their many sources of data.

To help address this need, last week at Build we announced an enterprise-class elastic data warehouse in the cloud called Azure SQL Data Warehouse. There are a number of distinctive features we'd like to highlight, including the ability to dynamically grow and shrink compute in seconds independent of storage, enabling you to pay only for the query performance you need. In addition, you can choose to simply pause compute so that you incur compute costs only when needed. The Azure SQL Data Warehouse service also gives you the ability to combine relational data with non-relational data hosted in Hadoop using PolyBase.

Azure SQL Data Warehouse combines enterprise-grade SQL Server with the massively parallel processing architecture of the Analytics Platform System, which allows the SQL Data Warehouse service to scale across very large datasets. It integrates with existing Azure data tools including Power BI for data visualization, Azure Machine Learning for advanced analytics, Azure Data Factory for data orchestration and movement, as well as Azure HDInsight, our 100% Apache Hadoop service for big data processing.

Here are five reasons why enterprises should choose Azure SQL Data Warehouse:

1) Enterprise-class cloud data warehouse built on SQL Server

SQL Data Warehouse extends the SQL Server family of products by bringing the massive scale of the Analytics Platform System into the cloud. By offering an enterprise-class cloud data warehouse based on SQL Server, customers can take advantage of the developer skills and knowledge built over years of working with the most widely deployed database in the world. SQL Data Warehouse supports the T-SQL constructs you're already familiar with for creating indexes, partitions and stored procedures, which allows you to easily migrate to the cloud. With native integrations to Azure Data Factory, Azure Machine Learning and Power BI, customers are able to quickly ingest data, utilize learning algorithms, and visualize data born either in the cloud or on-premises. Watch the Build announcement video below for an overview of Azure SQL Data Warehouse and the integration of other Azure data services to help you gain insight into your business.
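As a rough illustration of that familiar surface area (all object names here are hypothetical, and exact feature support in the preview may differ), T-SQL like the following carries over largely unchanged:

CREATE TABLE dbo.FactSales
(
    SaleId   BIGINT        NOT NULL,
    SaleDate DATE          NOT NULL,
    Amount   DECIMAL(18,2) NOT NULL
);

-- The familiar index and stored-procedure DDL the post refers to
CREATE CLUSTERED INDEX IX_FactSales_SaleDate ON dbo.FactSales (SaleDate);
GO

CREATE PROCEDURE dbo.GetDailySales @Day DATE
AS
SELECT SaleDate, SUM(Amount) AS TotalAmount
FROM dbo.FactSales
WHERE SaleDate = @Day
GROUP BY SaleDate;
GO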

2) Separating compute and storage enables a data warehouse to meet your needs

Azure SQL Data Warehouse independently scales compute and storage so customers pay only for the query performance they need. Unlike other cloud data warehouses that require hours or days to resize for additional compute power, SQL Data Warehouse allows customers to grow or shrink query compute in seconds. Since compute and storage scale independently, costs are much easier to forecast than with competing offerings. SQL Data Warehouse offers the right balance of compute and storage to meet your needs when you need them. This means you can scale your resources as your needs grow rather than investing in infrastructure for the future.
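In the service as it later shipped, that resize is a single T-SQL statement; a minimal sketch, assuming a database named MyDataWarehouse and an illustrative 'DW400' service objective:

-- Grow (or shrink) compute without touching storage
ALTER DATABASE MyDataWarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW400');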

3) Pause an instance to save costs

Dynamic pause enables customers to optimize the utilization of the compute infrastructure by ramping down compute while persisting the data. With other cloud vendors, customers are required to back up the data, delete the existing cluster, and, upon resume, create a new cluster and restore the data. This is both time-consuming and complex for scenarios such as data marts or departmental data warehouses that need variable compute power.

4) PolyBase in the cloud makes combining data sets easy

With the incredible growth of all types of data, the need to combine structured and unstructured data is essential. With PolyBase, SQL Data Warehouse makes combining data sets easy. SQL Data Warehouse can query unstructured and semi-structured data stored in Azure Storage, Hortonworks Data Platform, or Cloudera using familiar T-SQL skills, making it easy to combine data sets no matter where they are stored. Other vendors follow the traditional data warehouse model, which requires data to be moved into the instance to be accessible. SQL Data Warehouse allows the data to stay in Hadoop and combines the results with relational data via common T-SQL constructs. This keeps your data costs low and lets you choose the query speed that you need. The sketch below shows the pattern.
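A minimal sketch of the PolyBase pattern (storage account, file layout, and object names are hypothetical; preview syntax may differ, and a storage credential may also be required):

-- Point at data sitting in Azure Storage; the data itself never moves
CREATE EXTERNAL DATA SOURCE AzureBlob
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://logs@mystorageaccount.blob.core.windows.net');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

-- Only the schema lives in the warehouse
CREATE EXTERNAL TABLE dbo.ClickStream
(
    UserId INT,
    Url    NVARCHAR(400)
)
WITH (LOCATION = '/clickstream/',
      DATA_SOURCE = AzureBlob,
      FILE_FORMAT = CsvFormat);

-- Query it with plain T-SQL, exactly like a regular table
SELECT TOP 10 Url, COUNT(*) AS Hits
FROM dbo.ClickStream
GROUP BY Url
ORDER BY Hits DESC;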

5) Hybrid infrastructure supports your needs on-premises and/or in the cloud

The SQL Data Warehouse service is an extension of the SQL Server family of products, offering an additional data management choice to suit your business needs. Through support for a variety of first- and third-party products, SQL Data Warehouse lets you use the tools you use today to access, manage, manipulate and visualize data for faster insights. With SQL Data Warehouse you can quickly move to the cloud without having to move all of your infrastructure along with it. With the Analytics Platform System, Microsoft Azure and Azure SQL Data Warehouse, you can have the data warehouse solution you need on-premises, in the cloud or as a hybrid solution.

To learn more about Azure SQL Data Warehouse, click here. You can also sign up here to be notified once the Azure SQL Data Warehouse preview is available later this year.

Monday, May 4, 2015 8:30:00 AM

Satya Nadella, CEO of Microsoft, announced SQL Server 2016, an intelligent platform for a mobile-first, cloud-first world. The next major release of Microsoft's flagship database and analytics platform provides breakthrough performance for mission-critical applications and deeper insights on your data across on-premises and cloud. Top capabilities for the release include: Always Encrypted, a new capability that protects data at rest and in motion; Stretch Database, a new technology that lets you dynamically stretch your warm and cold transactional data to Microsoft Azure; enhancements to our industry-leading in-memory technologies for real-time analytics on top of breakthrough transactional performance; and new in-database analytics with R integration.

Always Encrypted

Data security is top of mind, especially for mission-critical applications, and SQL Server has been the enterprise database with the fewest security vulnerabilities six years running.* To help customers with data security and compliance when using SQL Server on-premises or in the cloud, we are introducing Always Encrypted. Always Encrypted, based on technology from Microsoft Research, protects data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data, and best of all, the encryption key resides with the application in the customer's trusted environment. Encryption and decryption of data happen transparently inside the application, which minimizes the changes that have to be made to existing applications.
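A minimal sketch of what an encrypted column looks like in the syntax that later shipped (the table and key names are hypothetical, and a column master key plus column encryption key must already be defined):

CREATE TABLE dbo.Patients
(
    PatientId INT IDENTITY(1,1) PRIMARY KEY,
    -- The driver encrypts SSN client-side; the server only ever sees ciphertext
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = MyCEK,           -- hypothetical key name
            ENCRYPTION_TYPE = DETERMINISTIC,         -- permits equality lookups
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ),
    LastName NVARCHAR(50) NOT NULL
);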


Stretch Database

Today, in the Ignite keynote, we showcased how you can gain the benefits of hyper-scale cloud in the box with new hybrid scenarios, including Stretch Database. As core transactional tables grow in size, you may need to archive historical data to lower cost and maintain fast performance. This unique technology allows you to dynamically stretch your warm and cold transactional data to Microsoft Azure, so your operational data is always at hand, no matter the size, and you benefit from the low cost of Microsoft Azure. You can use Always Encrypted with Stretch Database to extend your data in a more secure manner for greater peace of mind.
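As a hedged sketch, using the syntax that eventually shipped in SQL Server 2016 (preview builds may differ; the table name is illustrative, and the database must also be linked to an Azure server before migration begins):

-- Allow the instance to use Stretch Database
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Start migrating rows of a history table to Azure in the background
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));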

Real-time Operational Analytics & In-Memory OLTP

Building on our industry-leading and proven in-memory technologies, customers will benefit from the combination of real-time operational analytics with blazing-fast transactional performance, a first among enterprise vendors. In-Memory OLTP, which customers today use for transactions up to 30x faster than disk-based systems, can now be applied to a significantly greater number of applications, with increased concurrency. With these enhancements, we also introduce the unique capability to combine our in-memory columnstore, delivering queries up to 100x faster, with In-Memory OLTP for in-memory performance and real-time operational analytics.
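A minimal sketch of that combination in the syntax that later shipped (names are hypothetical): a memory-optimized table for OLTP with a clustered columnstore index over the same hot rows for analytics.

CREATE TABLE dbo.Orders
(
    OrderId   BIGINT        NOT NULL PRIMARY KEY NONCLUSTERED,
    OrderDate DATETIME2     NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL,
    -- Columnstore over the in-memory rows enables real-time analytics
    INDEX ccsi CLUSTERED COLUMNSTORE
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);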

Built-in Advanced Analytics, PolyBase and Mobile BI

For deeper insights into data, SQL Server 2016 expands its scope beyond transaction processing, data warehousing and business intelligence to deliver advanced analytics as an additional workload in SQL Server, with proven technology from Revolution Analytics. We want to make advanced analytics more accessible and increase performance for your advanced analytic workloads by bringing R processing closer to the data and building advanced analytic capabilities right into SQL Server. Additionally, we are building PolyBase into SQL Server, expanding the power to extract value from unstructured and structured data using your existing T-SQL skills. With this wave, you can then gain faster insights through rich visualizations on many devices, including mobile applications on Windows, iOS and Android.
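In the form that later shipped, in-database R runs through a system stored procedure; a minimal sketch (assumes the R services component is installed, and the table and script are illustrative):

-- Run R over the result of a T-SQL query, inside the database engine
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'out <- data.frame(mean_amount = mean(InputDataSet$Amount));',
    @input_data_1 = N'SELECT Amount FROM dbo.Orders',
    @output_data_1_name = N'out'
WITH RESULT SETS ((mean_amount FLOAT));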

Additional capabilities in SQL Server 2016 include:

  • Additional security enhancements with Row-Level Security and Dynamic Data Masking to round out our security investments alongside Always Encrypted.
  • Improvements to AlwaysOn for more robust availability and disaster recovery, including multiple synchronous replicas and secondary load balancing.
  • Native JSON support for better performance and support for the many types of data you work with (see the sketch after this list).
  • Upgrades to the SQL Server Enterprise Information Management (EIM) tools and Analysis Services in performance, usability and scalability.
  • Faster hybrid backup, high availability and disaster recovery scenarios to back up and restore your on-premises databases to Azure and place your SQL Server AlwaysOn secondaries in Azure.
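A minimal sketch of the native JSON support (table and column names are hypothetical):

-- Emit relational rows as JSON text
SELECT OrderId, OrderDate, Amount
FROM dbo.Orders
FOR JSON AUTO;

-- Shred JSON text back into relational rows
SELECT *
FROM OPENJSON(N'[{"OrderId":1,"Amount":9.99}]')
WITH (OrderId INT, Amount DECIMAL(18,2));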

In addition, there are many more capabilities coming with SQL Server 2016 that deliver mission critical performance, deeper insights on your data and allow you to reap the benefits of hyper-scale cloud.

Last week at Build we announced exciting innovations to support our mission of making it easier to work with your data, no matter how big or complex. We also shared how we bring capabilities to the cloud first in Azure SQL Database, such as Row-Level Security and Dynamic Data Masking, and then bring those capabilities, along with the learnings from running them at hyper-scale, back to SQL Server to improve our on-premises offering. Thus, all our customers benefit from our investments and learnings in Microsoft Azure. In addition to our hybrid cloud scenarios and investments in running SQL Server 2016 in Azure Virtual Machines, SQL Server delivers a complete database platform for hybrid cloud, enabling you to more easily build, deploy and manage solutions that span on-premises and cloud.

As the foundation of our end-to-end data platform, with this release of SQL Server we continue to make it easier for customers to maximize their data dividends. With SQL Server 2016 you can capture, transform, and analyze any data, of any size, at any scale, in its native format, using the tools, languages and frameworks you know and want, in a trusted environment on-premises and in the cloud.

Be sure to visit the SQL Server 2016 preview page to read about the capabilities of SQL Server 2016 and sign up to be notified once the public preview is available.

* NIST National Vulnerability Database, May 4, 2015

Wednesday, April 29, 2015 4:00:00 PM

Now’s the perfect time to deploy Microsoft SQL Server 2014. Sure, it enables faster data processing and performance, but that’s just the beginning. If ensuring business-critical performance, maintaining security and compliance, and optimizing your data infrastructure are important to you, Microsoft has the tools and resources to support your migration from SQL Server 2005.  

With the help of SQL Server 2014, leading hospital Beth Israel Deaconess Medical Center cut its query time from 45 to 10 seconds and can now query decades of historical data on demand with HDInsight. In addition, its IT team can implement new features without rewriting applications.

Discover the measurable difference SQL Server 2014 has made for other organizations—and can make for you too.

Read Forrester's The Total Economic Impact™ of Microsoft SQL Server, a commissioned study conducted by Forrester Consulting on behalf of Microsoft.

Wednesday, April 29, 2015 3:00:00 PM

In this mobile-first, cloud-first world, we're creating and consuming data through new devices and services, and developers are building applications and analytics solutions at a rapid pace to take advantage of the new forms, types and sizes of data. As Scott Guthrie talked about in his keynote this morning, a big piece of what we've been working on, and will continue to invest in, is making it easier to work with all your data, no matter how big or complex, and to build new applications that utilize data to take advantage of the intelligent cloud. Today, we're pleased to share three major data platform announcements: Azure SQL Database elastic databases, Microsoft's new offering to support SaaS applications; Azure SQL Data Warehouse, a fully managed relational data warehouse-as-a-service; and Azure Data Lake, Microsoft's hyper-scale data store optimized for big data analytic workloads.

Azure SQL Database elastic databases

As customers look to ease and expedite building and managing applications, the scale, simplicity and economics of the cloud are impossible to ignore. With new capabilities and enhanced security features, Microsoft's relational database-as-a-service, Azure SQL Database, can support robust enterprise applications in the cloud as well as new SaaS applications, including:

  • Elastic databases – available in preview today – allow you to build SaaS applications to manage large numbers of databases that have unpredictable resource demands. Managing dynamic resource needs can be more art than science, and with these new capabilities, you can pool resources across databases to support explosive growth and profitable business models. Instead of overprovisioning to accommodate peak demand, cloud ISVs and developers can use an elastic database pool to share resources across hundreds – or thousands – of databases within a budget that they control. Additionally, we are making tools available to help query and aggregate results across these databases as well as implement policies and perform transactions across the database pool. 



    Create a pool of elastic databases to scale and share resources across unpredictable demands.

  • New security capabilities for managing data and applications in Azure: Row-Level Security and Dynamic Data Masking are already in preview, and new in preview today is Transparent Data Encryption. Transparent Data Encryption has been a top request from customers, and we are excited to bring it to market, building on the other advanced security features already available in preview.

  • Preview of full-text search capabilities in Azure SQL Database to support richer search in new cloud applications. With this and other features such as the in-memory columnstore and parallel query, we continue to bring the benefits of decades of innovation in on-premises query processing to the cloud, making it even easier to migrate existing on-premises SQL Server applications to the cloud (see the sketch below).
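A minimal sketch of the familiar full-text pattern (table, key index, and search terms are hypothetical; preview capabilities may differ from on-premises):

CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;

-- Requires a unique, single-column key index on the table
CREATE FULLTEXT INDEX ON dbo.Products (Description)
    KEY INDEX PK_Products;

-- Linguistic search rather than LIKE pattern matching
SELECT ProductId, Name
FROM dbo.Products
WHERE CONTAINS(Description, N'wireless NEAR charger');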

Azure SQL Data Warehouse

As customers move more applications and structured data in the cloud, we’ve seen strong demand for additional options for cloud-based data warehousing and analytics. Scott also announced Azure SQL Data Warehouse, a new, first-of-its-kind elastic data warehouse in the cloud. It’s the first enterprise-class cloud data warehouse that can dynamically grow, shrink and pause compute in seconds independent of storage, enabling you to pay for the query performance you need, when you need it. Azure SQL Data Warehouse is based on the massively parallel processing architecture currently available in both SQL Server and the Analytics Platform System appliance, and will work with existing data tools including Power BI for data visualization, Azure Machine Learning for advanced analytics, Azure Data Factory for data orchestration and Azure HDInsight, our 100% Apache Hadoop managed big data service. The preview for Azure SQL Data Warehouse will be available later this calendar year.

Introducing Azure SQL Data Warehouse

Azure Data Lake

For customers looking to maximize value from unstructured, semi-structured and structured data, we announced Azure Data Lake, a hyper-scale data store for big data analytic workloads. Azure Data Lake is built to remove the restrictions found in traditional analytics infrastructure and realize the idea of a "data lake": a single place to store every type of data in its native format, with no fixed limits on account size or file size, high throughput to increase analytic performance, and native integration with the Hadoop ecosystem. Azure Data Lake is compatible with the Hadoop File System (HDFS), is integrated with Azure HDInsight, and will be integrated with Microsoft offerings such as Revolution R Enterprise and industry-standard distributions like Hortonworks and Cloudera. The preview for Azure Data Lake will be available later this calendar year.

Microsoft Azure Data Lake supports multiple big data analytic workloads

Try and sign up for new previews today

The move to the cloud is accelerating across industries, and we are proud to provide a comprehensive database and analytics platform that enables you to more easily work with big data and extract as much value as possible from your data to accelerate your business. Over the last few months we've shared a wave of new platform offerings and innovations, from the general availability of the latest Azure SQL Database release, bringing near-complete compatibility with SQL Server, to our preview of HDInsight on Linux, our first managed service running on Linux, and the general availability of new cloud services such as Azure DocumentDB and Azure Search. With today's announcements, we're building on our existing investments and continuing to make it easier for customers to capture, transform, and analyze any data, of any size, at any scale, using the tools, languages and frameworks they know and want, in a trusted environment on-premises and in the cloud.

Try out the Azure SQL Database previews made available today and sign up to be notified as the Azure SQL Data Warehouse and Azure Data Lake previews become available. Stay tuned for more on Microsoft’s data platform at next week’s Ignite conference in Chicago.

Thursday, April 23, 2015 1:00:00 PM

Last week we announced the General Availability of Azure Premium Storage for Azure Virtual Machines.

Premium Storage provides steady high throughput (up to 64,000 IOPS, 8x more than Standard Storage) at low latency (single-digit milliseconds, 8x less than Standard Storage). This enables enterprise SQL Server workloads (OLTP or data warehousing) that need consistent high performance to run on Azure VMs.

During the preview of Premium Storage we worked with many SQL Server customers with different workload sizes to ensure that Premium Storage satisfied their requirements on Azure VMs. Here are some examples of customer results:

  • Transaction latency for thousands of concurrent users consistently within 10ms
  • Query times over large data sets reduced from minutes in standard storage to seconds
  • Batch loads for millions of records reduced from hours in standard storage to minutes
  • Backup/restore times on large databases reduced from many hours in standard storage to less than one hour

Azure Premium Storage and SQL Server

Premium Storage is based on Solid State Disks (SSDs) in the storage backend, dedicated fast connections between the storage backend and new compute clusters, and VMs' local read-only caches, which are also SSD-based. Writes are sent to the backend, which guarantees their persistence via 3 copies. Writes also trigger an update of the VM read-only cache. Reads that can be served from the cache return immediately; others are served quickly from the backend, also updating the cache as a result. More details here.

VMs using Premium Storage get guaranteed higher storage bandwidth to serve writes and reads. Reads served from the cache are not counted toward this bandwidth. The high bandwidth allows writing and reading more data per second to and from storage. This increases transaction throughput and reduces the time for query scans and other operations such as backup/restore, batch loads, and index rebuilds.

The following PerfMon picture shows a SQL Server backup consistently reading and writing ~500MB/s:

The main benefit of fast storage writes is lower SQL Server transaction latency, achieved by shortening the time to synchronously write commit records to the log file. This benefits both standalone and AlwaysOn configurations, where the secondary must acknowledge writing commit records. Fast storage writes also reduce the time for other SQL Server write operations, such as checkpoints (which asynchronously write dirty pages to disk) and log redo on an AlwaysOn secondary.

The main benefit of fast reads is lower SQL Server query time, achieved by shortening the time to retrieve data pages, especially when they are served from the read-only cache. In addition, the higher storage bandwidth helps retrieve more data pages. The read-only cache benefits data files, since data pages are read very frequently. It does not benefit log files, since log records are read only during infrequent operations (e.g., backups).

The following PerfMon picture shows a SQL Server workload executing an average of 9K batch requests per second. This accounts for 20K reads and 17K writes per second (37K IOPS). The average read latency is just 1ms with a max of 6ms, and the average write latency is just 3ms with a max of 10ms.

Azure Premium Storage Options

There are 3 types of Premium Storage disks to choose from: P10, P20, and P30. The disk type is determined by its size, and each type is assigned a different number of IOPS and a different bandwidth:

Disk Type   Disk Size   Storage IOPS   Storage Bandwidth (MB/s)
P10         128 GB      500            100
P20         512 GB      2,300          150
P30         1,024 GB    5,000          200

To support Premium Storage, there is a new series of VMs called DS-Series. The capabilities of these VMs are below:

VM Size   CPU Cores   Max Storage Disks   Max Storage Space   Max Storage IOPS*   Max Storage Bandwidth (MB/s)   Cache Size (GB)
DS1       1           2                   2 TB                3,200               32                             43
DS2       2           4                   4 TB                6,400               64                             86
DS3       4           8                   8 TB                12,800              128                            172
DS4       8           16                  16 TB               25,600              256                            344
DS11      2           4                   4 TB                6,400               64                             72
DS12      4           8                   8 TB                12,800              128                            144
DS13      8           16                  16 TB               25,600              256                            288
DS14      16          32                  32 TB               50,000              512                            576

  * Does not include IOPS served directly from the VM read-only cache

Note that the total IOPS and bandwidth depend on the combination of VM size, the number of disks, and the sizes of those disks.

Consider the size of your database, your workload requirements, and pricing when choosing among the above. Note that a VM can have disks of different sizes, and it's even possible to mix disks from Premium and Standard Storage. More details here.

Creating a new SQL Server Virtual Machine using Premium Storage

  1. Go to the new Azure Portal
  2. Create a storage account of type Premium Locally Redundant

    Notice that there is a limit of 32TB per storage account. If you need more storage, then create more storage accounts.
  3. Create a new VM using a SQL Server image from the Gallery, specifying a DS-Series VM and the Premium Storage account that you previously created (type PREMIUM-LRS). Note that this VM can't be added to resource groups that contain other VM series (DS-Series VMs are hosted on new compute clusters).

  4. Attach disks to the VM
    Select the VM that you previously created, go to Disks, and select Attach New. Choose the Premium Storage account that you previously created, a container for the disk (vhds by default), the disk file name, size, and caching. A common basic configuration uses 3 disks: one for data, one for log, and one for TempDB (see the sketch below for pointing database files at the new disks).
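Once the disks are attached and formatted (the drive letters below are hypothetical), database files can be pointed at them with ordinary T-SQL, for example:

-- Move TempDB to its dedicated disk (takes effect after a service restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'H:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'H:\TempDB\templog.ldf');

-- Create a user database with data and log on separate disks
CREATE DATABASE SalesDB
ON PRIMARY (NAME = SalesDB_data, FILENAME = N'F:\Data\SalesDB.mdf')
LOG ON     (NAME = SalesDB_log,  FILENAME = N'G:\Log\SalesDB.ldf');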

Migrating an existing SQL Server to Premium Storage

Note that it's not possible to upgrade an existing Standard Storage account to Premium Storage, and that DS-Series VMs can't be added to a resource group that has other VM series.

To migrate an existing SQL Server to Premium Storage, create a new DS-Series VM that uses a Premium Storage account. Then back up and restore your databases and copy your SQL Server configuration (equivalent to a side-by-side migration).

To reduce downtime during the migration to a few minutes:

  1. Take a full backup of the databases and restore them to the new SQL VM (see the sketch after this list)
  2. Disconnect the clients from the databases in the old SQL VM:
    ALTER DATABASE YourDatabase SET SINGLE_USER WITH ROLLBACK AFTER 20 SECONDS;  -- YourDatabase is a placeholder; run once per database
  3. Take a log backup of the databases (to capture any final transactions) and restore them to the new SQL VM
  4. Change the clients' connection string to point to the new SQL VM
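A minimal T-SQL sketch of steps 1 and 3 (database name and backup paths are placeholders; a network share or an Azure blob URL both work as backup targets):

-- Step 1: full backup on the old VM, restored on the new VM without recovery
BACKUP DATABASE YourDatabase
    TO DISK = N'\\fileshare\backups\YourDatabase_full.bak' WITH COMPRESSION;

RESTORE DATABASE YourDatabase
    FROM DISK = N'\\fileshare\backups\YourDatabase_full.bak' WITH NORECOVERY;

-- Step 3: after disconnecting clients, capture the tail of the log and finish
BACKUP LOG YourDatabase
    TO DISK = N'\\fileshare\backups\YourDatabase_tail.trn';

RESTORE LOG YourDatabase
    FROM DISK = N'\\fileshare\backups\YourDatabase_tail.trn' WITH RECOVERY;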

If you are using SQL AlwaysOn Availability Groups you can minimize downtime during the migration to seconds. Availability Groups allow you to failover a group of databases from a primary SQL Server replica to a secondary SQL Server replica in seconds without data loss. In addition, applications connect to the primary replica using a listener (virtual network name), so their connection string doesn’t need to change.

You can add a synchronous secondary SQL Server replica in a DS-Series VM that uses Premium Storage and fail over to it. Note that you will need to add the secondary replica VM to the same Windows domain and the same Windows Cluster as the primary replica. In addition, you will need to create an endpoint for the secondary replica VM and add it to the load balancer supporting the Availability Group listener. A sketch of the replica changes follows.
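A hedged sketch of the T-SQL side of that move (availability group, server, and endpoint names are hypothetical; the Windows Cluster, endpoint, and load balancer work happens outside T-SQL):

-- On the current primary: add the new DS-Series VM as a synchronous replica
ALTER AVAILABILITY GROUP [AG1]
ADD REPLICA ON N'NEWSQLVM'
WITH (
    ENDPOINT_URL = N'TCP://NEWSQLVM.contoso.local:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL
);

-- Later, on the new replica once it is synchronized: take over as primary
ALTER AVAILABILITY GROUP [AG1] FAILOVER;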

More details here.

Performance Best Practices

Most existing performance best practices apply to Premium Storage. Premium Storage disks have much higher bandwidth and IOPS limits than Standard Storage disks, so a smaller number of Premium Storage disks will satisfy your performance requirements (especially with P30 disks). Consider the bandwidth and IOPS limits of the DS-Series VM sizes when determining the number and types of disks.

To get the highest performance:

  1. Use a Premium Storage account and a VM in the same region
  2. Use separate disks for data files, log files, and TempDB files
  3. Enable the read-only cache for data and TempDB disks, but not for log disks
  4. If you need higher bandwidth or IOPS, use Storage Spaces over multiple disks to aggregate their IOPS, bandwidth, and storage space. Use separate Storage Spaces for data and log. Depending on your TempDB requirements, you can put TempDB in the storage pool for data files or in a separate pool.

Summary

Premium Storage provides steady high throughput at low latency. This enables enterprise SQL Server workloads (OLTP or data warehousing) that need consistent high performance to run on Azure VMs.

Many SQL Server customers with workloads of different sizes have satisfied their requirements on Azure VMs using Premium Storage. We hope that you will too!

Premium Storage is available in the following regions: West US, East US 2, West Europe, East China, Southeast Asia, West Japan. It'll become available in other regions soon.

Learn more about SQL Server on Azure VM and try Premium Storage today!

Tuesday, April 21, 2015 9:00:00 AM

Don’t get caught off-guard.

Upgrading to Microsoft SQL Server 2014 helps you maintain security and compliance, gain faster data insights, and optimize your data infrastructure—all using familiar tools you already know and trust.

Be prepared when extended support for SQL Server 2005 ends. Protect your data with a thoughtful upgrade plan and mitigate cost and risk. Read this complimentary report today to learn more about the robust backward compatibility features and tools that enable a range of migration options. 

Read Directions on Microsoft’s report: Migrating from SQL Server 2005

Friday, April 17, 2015 5:40:00 PM

May 15, 2015 Update: SQL Server 2014 Service Pack 1 is now available for download

+++

On Wednesday, April 15, Microsoft launched SQL Server 2014 Service Pack 1 (SP1).  Shortly after release we discovered an installation issue.  As a result, we have put the Service Pack downloads on hold.

SQL Server 2014 SP1 will be re-released in the next few weeks. In the meantime, we recommend that you continue using SQL Server 2014 and its Cumulative Updates.

If you have encountered an issue during installation of the Service Pack, this article on the Release Services Blog will provide workaround steps.

Thursday, April 16, 2015 9:00:00 AM

In honor of the upcoming PASS Business Analytics conference, we wanted to take some time to spotlight the great work happening in the SQL and BI communities across the world. The conference is focused on business analytics, but PASS offers many great community activities for SQL Server and beyond. Learn about the various local and digital opportunities to connect with the PASS community here.

Name: Luan Moreno Medeiros Maciel

Role: SQL Server Database Consultant at The Pythian Group

Location: Brazil

 

What is an exciting project that you’re working on right now?

I have a few interesting projects going on right now…

Hybrid Cloud on SQL Server: This project consists of a hybrid cloud spanning Amazon and the client's local datacenter. I'm designing this solution utilizing Database Mirroring on SQL Server 2012. After the implementation, the client will have a highly available solution across the two sites.

In-Memory OLTP (a.k.a. "Hekaton") Feature Implementation: Implementing memory-optimized tables for a mid-size company. The idea is to provide a big In-Memory table that absorbs all the requests and then, at a specific time of day, flushes the data to disk-based tables.

Partitioning in SQL Server 2012: Partitioning a big table to enable better maintenance and faster data retrieval. The data will be split by month.

Creating a DW (Data Warehouse) for an Educational Company: Building the complete data warehouse process for a client who needs to analyze data with end-user visualization tools in Excel (Power View, Power Map and PowerPivot).

 

What are your current analytics and/or database challenges, and how are you solving them?

I have a few challenges I am currently navigating. The first is determining the proper tables to use in an In-Memory OLTP implementation. To help choose the correct tables to put in memory, I am using the Data Collector to analyze the client's workload and generate insights. The second is creating the right table partitioning to improve one of my clients' weekly maintenance. To do this I am analyzing the client's server, databases, and maintenance routines to see what options are available and to determine the specific timeframe to apply the changes. I am also consulting some DMVs and the msdb database to check the jobs on this instance.

 

How does data help you do your job better?

Data is the center of my world; I need to analyze, maintain, sustain, and provide insights and solutions to my clients. All of this can happen because of data.

 

What types of data-related challenges do you frequently see with customers, and how are you solving them?

Working with customers requires you to understand the customer's environment, check what you have available to accomplish the required work, and then provide the best solution possible.

 

What’s your favorite example of how data has provided an insight, a decision, or a shift in how business gets done?

One of my clients had several problems related to data analysis. Because their data was not organized correctly in their database, the client couldn't provide solutions to front-end users. To solve the problem, my team built a Data Quality Services structure with the client, organizing the data so that insights from the OLTP system could be recognized. As a result of these improvements, the client started to generate their own insights from the new, reinvented data.

 

What or who do you read, watch, or follow to help grow your data skills? (book, blog, Twitter handle, podcast, online course…anything!)

 

What’s your favorite SQL command and/or Excel function and why?

-- Add a memory-optimized filegroup, the prerequisite for In-Memory OLTP tables
ALTER DATABASE imoltp
ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA;

 

This command tells me for certain that I'll be working with In-Memory OLTP (a.k.a. "Hekaton").
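For context, the new filegroup needs a container before memory-optimized tables can be created; a minimal follow-up (the path is hypothetical) looks like:

-- Add a container (a folder, not a file) to the memory-optimized filegroup
ALTER DATABASE imoltp
ADD FILE (NAME = 'imoltp_mod_container', FILENAME = 'C:\Data\imoltp_mod')
TO FILEGROUP imoltp_mod;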

 

Thanks for joining us, Luan!

Know someone doing cool work with data? Nominate them for a spotlight in the comments.
