About Me

I now work for Microsoft Federal in Chevy Chase, MD.

Dedicated to providing customer-driven, results-focused solutions to the complex business problems of today... and tomorrow.

At SQLTrainer.com, LLC, we understand that the technical challenges faced by businesses today are much greater in both scope and complexity than they have ever been. Businesses today must understand both local IT infrastructures and cloud-based technologies.

What is SQLTrainer.com?

Founded in 1998 by Ted Malone, SQLTrainer.com is a technical consulting, training and content development firm dedicated to the following core principles:

  • Technology Alone is NOT the Answer! - Implementing a particular technology because it is interesting or "cool" will not solve customer problems.
  • Technology Solutions do NOT need to be Overly Complex! - Many times developers and technical practitioners will attempt to build the most clever solution possible. While this serves to stroke the egos of those involved, it doesn't provide a maintainable solution.
  • Consultants Should be Mentors First! - When looking to hire an external consultant, businesses should look to the consultant who's willing to train themselves out of a paycheck.

Why the name, SQLTrainer.com?

SQL (pronounced See-Quell) stands for Structured Query Language, which is at the heart of every modern relational database system. Since many technology solutions today rely on some form of database storage or interaction, it was only logical to find a way to incorporate SQL into the name of the organization. Given that one of our core principles is to be a mentor/trainer above everything else, the name SQLTrainer made sense. Since we also wanted to represent our embrace of the cloud, it seemed logical to add the ".com", referring to the biggest "cloud" of them all.

Live Feeds

RSS Feeds
Friday, January 30, 2015 9:00:00 AM

Yesterday we announced exciting news for Power BI – a cloud-based business analytics service (software-as-a-service) for non-technical business users. The preview introduces a number of new Power BI capabilities, including dashboards, new visualizations, support for popular software-as-a-service applications, a native iPad app, and live “hybrid” connectivity to on-premises SQL Server Analysis Services tabular models. With just a browser – any browser – or a Power BI mobile app, customers can keep a pulse on their business via live operational dashboards. They can explore their business data through interactive visual reports and enrich it with additional data sources.

How does it work with SQL Server?
To interact with SQL Server data in Power BI, connect to an SSAS server via the ‘Get Data’ menu. From there, you can connect to a model and run queries for visualizations based on that model. Before your users can connect to an SSAS model, an administrator must configure a Power BI Analysis Services connector.


To learn more about the Power BI preview, watch as Michael Tejedor gives Jeremy Chapman from Office Mechanics a first look at what’s new.

Thursday, January 29, 2015 11:15:00 AM

By Tiffany Wissner, Sr. Director for Data Platform Marketing

Today we are announcing general availability for the latest update to Azure SQL Database, introduction of new SQL Database security features, more automation for SQL Server in Azure Virtual Machines and SQL Server on G-Series VMs.  Take a closer look at these exciting new improvements.

1. New update to Azure SQL Database Generally Available in Europe

Generally available today in Europe, the latest version of SQL Database introduces near-complete SQL Server engine compatibility, greater support for larger databases, and expanded Premium performance. Internal tests on over 600 million rows of data show Premium query performance improvements of around 5x in the new preview relative to today’s Premium SQL Database and up to 100x when applying the In-memory columnstore technology. 
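The in-memory columnstore gains cited above come from storing large fact tables in a clustered columnstore index. As a minimal sketch of what that looks like in T-SQL (the table and column names here are hypothetical):

CREATE TABLE [dbo].[FactSales]
(
	[SaleId] BIGINT NOT NULL,
	[SaleDate] DATE NOT NULL,
	[StoreId] INT NOT NULL,
	[Amount] DECIMAL(18,2) NOT NULL
);

-- Store the table in columnar format for fast analytic scans
CREATE CLUSTERED COLUMNSTORE INDEX [cci_FactSales] ON [dbo].[FactSales];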

General availability will continue across United States regions on February 9, 2015 with rollout to most datacenters worldwide by March 1, 2015. General availability pricing will take effect for databases on V12 servers worldwide on April 1, 2015.

“As a company committed to maintaining the highest innovation standards for our global clients, we’re always eager to test the latest features,” said John Schlesinger, Chief Enterprise Architect at Temenos. “So previewing the latest version of SQL Database was a no-brainer for us. After running both a benchmark and some close-of-business workloads, which are required by our regulated banking customers, we saw significant performance gains, including a doubling of throughput for large blob operations, which are essential for our customers’ reporting needs.”

2. New Security Features for Azure SQL Database

Today also marks the introduction of a suite of security features coming to the latest version of SQL Database: Row-Level Security, Dynamic Data Masking, and Transparent Data Encryption. These features join the existing Auditing feature to help customers further protect their cloud data and meet corporate and industry compliance policies. Available in public preview today across all of the new service tiers, Row-Level Security lets customers implement fine-grained access control over the rows in a database table, for greater control over which users can access which data.

Coming soon, SQL Database will also preview Dynamic Data Masking, a policy-based security feature that helps limit the exposure of data by returning masked data to non-privileged users who query designated database fields, such as credit card numbers, without changing the data in the database. Finally, we are excited to announce that Transparent Data Encryption is coming to SQL Database V12 databases for encryption at rest. As data security is top of mind for customers building new applications in the cloud, these new security features will be available in the Basic, Standard and Premium service tiers.
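To give a flavor of how Row-Level Security is expressed in T-SQL, the sketch below binds a filter predicate to a table through a security policy. The schema, table, and function names are hypothetical, and the exact syntax should be confirmed against the preview documentation:

-- Inline table-valued function: a row is visible only to the sales rep it belongs to
CREATE FUNCTION [Security].[fn_securitypredicate](@SalesRep AS sysname)
	RETURNS TABLE
	WITH SCHEMABINDING
AS
	RETURN SELECT 1 AS [fn_result]
	WHERE @SalesRep = USER_NAME();
GO

-- The security policy attaches the predicate to the table and enforces it transparently
CREATE SECURITY POLICY [SalesFilter]
	ADD FILTER PREDICATE [Security].[fn_securitypredicate]([SalesRep])
	ON [dbo].[Sales]
	WITH (STATE = ON);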

3. More simplified availability, setup, backup, and patching for SQL Server in an Azure VM

SQL Server AlwaysOn technology provides both the high availability and disaster recovery capabilities needed for mission-critical applications. One of the challenges is getting the HA environment set up, as it’s not trivial. Now, with the new automated HA setup capabilities in the AlwaysOn portal template for SQL Server in Azure VMs, this becomes a much simpler task, freeing up your valuable time and resources to focus on other business priorities. The automated setup configures the listener and provisions the AlwaysOn VM cluster to meet HA and DR requirements. This new capability applies to hybrid scenarios, where you might be setting up failover for an on-premises SQL Server workload, as well as cloud-only scenarios, where you want to set up failover from one Azure region to another.

Backups for data protection are easier now as well, with the ability to automate full SQL Server backups from an Azure VM to Azure Storage. Additionally, SQL Server patches delivered through Windows Update also get better, with a new auto-patching capability that gives you more granular control over the Windows Update schedule for predictable timing of updates. Monitoring and managing SQL Server instances running in Azure VMs gets better as well, with the ability to view and manage SQL Server alerts directly through the Azure Portal.

This new suite of capabilities, along with previously released Azure VM capabilities, helps make running large enterprise SQL Server workloads in Azure VMs more efficient than ever before.

4. SQL Server on Massive G-Series VMs

The new G-Series VMs are ideal for large SQL Server OLTP workloads, especially when combined with SQL Server 2014 in-memory OLTP technology: the new Azure G-Series VMs offer up to 32 vCPUs, 448 GB of memory, and 6.59 TB of local SSD. Combined with the in-memory OLTP technology built into SQL Server 2014, you can maximize the performance of these large VMs by taking advantage of a unique parallel-processing, in-memory architecture that removes database deadlocks while ensuring 100% durability. This means you can gain not only significant transactional performance, but also significantly improved concurrency by taking full advantage of all 32 vCPUs.
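For context, the in-memory OLTP technology referred to here is enabled per table. A hedged sketch in SQL Server 2014 T-SQL (the names are hypothetical, and the database must already contain a MEMORY_OPTIMIZED_DATA filegroup):

CREATE TABLE [dbo].[Orders]
(
	[OrderId] BIGINT NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
	[CustomerId] INT NOT NULL,
	[PlacedAt] DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- Lock- and latch-free access removes deadlocks; SCHEMA_AND_DATA preserves full durability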

Great options in Azure for your SQL Server enterprise workloads

Whether you are looking to run your SQL Server workload in an Azure virtual machine or via the SQL Database managed service, there’s no better time than now to move your enterprise workloads to the cloud or build new applications with Microsoft Azure.

If you have an ecosystem of IT resources who can continue to manage and maintain your application in the cloud, SQL Server on an Azure Virtual Machine is the ideal option—now with even more built-in productivity, power and scale. If you don’t have an ecosystem of IT resources or don’t want them maintaining and patching every database in your portfolio, the greater SQL Server compatibility, predictable Premium performance, built-in 99.99% availability and upcoming suite of security features make Azure SQL Database an ideal destination.

We’re excited to share these ongoing improvements to our SQL Server cloud offerings with you, helping make SQL Server on an Azure VM and Azure SQL Database two great migration or deployment targets for your enterprise-grade SQL Server workloads.

If you haven’t already, start a free trial on SQL Server in Virtual Machines or Azure SQL Database today!

Thursday, January 29, 2015 11:10:00 AM

In August, we announced the SQL Server AlwaysOn Offering in the Microsoft Azure Portal Gallery. This offering fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using AlwaysOn Availability Groups.

Now, we have updated this offering with some exciting improvements. Namely, we have added support for using existing domains, and we have optimized the time it takes to deploy, so you save even more time than before!

AlwaysOn Availability Groups in Azure

SQL Server AlwaysOn technology provides the high availability capabilities needed for mission-critical applications. One of the main challenges of this technology is that it requires a complex and time-consuming setup. With the new SQL Server AlwaysOn template in the Azure Portal, this process is greatly simplified, freeing up your valuable time and resources. After you enter the desired settings in the Portal, the setup is completed automatically. With this setup you get an Availability Group with two SQL VMs, a listener configured to point to the current primary, a file share witness, a failover cluster, and two domain controller VMs for a new or existing domain.

Existing Windows Domain

With this update, you can select between having a new domain created for this configuration and utilizing an existing domain you have pre-configured with all your specific requirements. You can use the new domain option to have a domain fully created and set up for you. This is a good option if you do not have very specific domain requirements, or do not have an existing domain you wish to utilize. You can use the existing domain option if you have very specific requirements or prefer to reuse a pre-configured domain from on-premises or in Azure. With this option selected, the SQL primary, secondary, and file share witness will be successfully added to your existing domain.

To use an existing domain, first select the correct existing Virtual Network for that domain, then select the existing domain with the user credentials, as shown in the screenshot below.

Execution Time Cut in Half

Going through the manual setup of an AlwaysOn Availability Group can take anywhere from 2-6 hours, depending on your level of expertise on the technology. When this feature was initially released, it decreased the amount of work to set this up to 1 minute, and had the configuration completely deployed after about 1.5 hours. Now we have decreased that even further, to around 45 minutes. This saves even more valuable time and allows you that much faster access to highly available SQL VMs to use for your critical business applications.

Thursday, January 29, 2015 11:05:00 AM

In an effort to provide an extra level of convenience, we are releasing two features that simplify the effort of keeping your SQL Virtual Machine and your data healthy. These features, Automated Backup and Automated Patching, automate the processes of backing up and patching your SQL Virtual Machines. Incredibly easy to set up, they require little input to manage. And these are just the first services to be automated.

These services will be available to you for configuring SQL VMs in Azure via Portal and PowerShell. Via PowerShell, you will be able to enable these services for new and existing SQL VMs. In the Azure Portal, you will be able to enable these services when provisioning new VMs.

Automated Backup

This service enables you to configure a backup schedule on your SQL Server 2014 Enterprise and Standard Virtual Machines in a very convenient manner while ensuring your data is backed up consistently and safely. Automated Backup is configured to back up all existing and new databases for the default instance of SQL Server. This simplifies the usual process of configuring Managed Backup for new databases, and then for each existing database, by combining it into one simple automated setup.

This feature is disabled by default, and once it is enabled, requires very little effort to configure. If you do not wish to change the default settings, no work is required beyond enabling the service. If you wish to customize the settings, you can specify the retention period, storage account, and whether you want encryption to be enabled. The retention period, as is standard for Managed Backup, can be anywhere between 1 and 30 days. The storage account defaults to the same storage account as the VM, but can be changed to any other storage account. This provides you with a DR option, allowing you to back up your databases to storage in another datacenter. If you decide to encrypt your backups, an encryption certificate will be generated and saved in the same storage account as the backups. In this scenario, you will also need to enter a password which will be used to protect the encryption certificates used for encrypting and decrypting your backups. This allows you to not worry about your backups beyond the configuration of this feature, and also ensures you can trust that your backups are secure.
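As an illustration of the PowerShell path mentioned earlier, enabling Automated Backup with the classic (Service Management) Azure cmdlets of this era might look like the sketch below; the service, VM, and storage names are placeholders, and the cmdlet parameters should be verified against the current documentation:

# Build an Automated Backup configuration: 10-day retention, backups kept in the given storage account
$storageCtx = New-AzureStorageContext -StorageAccountName "mybackupstorage" -StorageAccountKey $storageKey
$backupCfg = New-AzureVMSqlServerAutoBackupConfig -Enable -RetentionPeriod 10 -StorageContext $storageCtx

# Apply the configuration to an existing SQL Server VM via the SQL Server IaaS Agent extension
Get-AzureVM -ServiceName "myservice" -Name "mysqlvm" |
	Set-AzureVMSqlServerExtension -AutoBackupSettings $backupCfg |
	Update-AzureVM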

You can see a screenshot of what you will see in the Azure Portal here:

Automated Patching

Many customers told us that they would like to move their patching schedules off business hours. This feature enables you to do exactly that – define a maintenance window so that patch installs occur only in the range you have specified.

The settings available for Automated Patching will look familiar, because they mimic those of the Windows Update Agent (the service that drives patching of your Windows machine). They are simple and powerful at the same time. All you need to define to make sure patches are applied when you want is the day of the week, the start of the maintenance window, and the duration of the maintenance window. The feature relies on the Windows Update and Microsoft Update infrastructure and installs any update that matches the ‘Important’ category for the machine.
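A sketch of the equivalent PowerShell configuration, again using the classic cmdlets (the VM names are placeholders and parameter names should be checked against the documentation), here requesting a Sunday maintenance window starting at 2 AM and lasting 120 minutes:

$patchCfg = New-AzureVMSqlServerAutoPatchingConfig -Enable -DayOfWeek "Sunday" `
	-MaintenanceWindowStartingHour 2 -MaintenanceWindowDuration 120 -PatchCategory "Important"

Get-AzureVM -ServiceName "myservice" -Name "mysqlvm" |
	Set-AzureVMSqlServerExtension -AutoPatchingSettings $patchCfg |
	Update-AzureVM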

This feature allows you to patch your Azure Virtual Machines in an effective and predictable way, even when those VMs are not joined to any domain and not controlled by any patching infrastructure.

There are a number of ways to configure Automated Patching, but the easiest is the new Azure Portal; the screenshot below shows what the configuration screen can look like.

SQL Server IaaS Agent

Both features are part of a new component, the SQL Server IaaS Agent, which is installed on the VM when either feature is enabled. It is built as an Azure VM Extension, meaning all the Azure VM Extension concepts apply, making it a perfect tool for managing SQL Server in Azure VMs at scale. You can push the IaaS Agent to a number of VMs at once, configure it, and remove or disable it as well.

This IaaS Agent moves SQL Server one step closer to being the best application to run in Azure VMs.


Try these features out for yourself at https://portal.azure.com.

For further details, here is the documentation page for these features.

Tuesday, January 20, 2015 2:32:00 PM

Interested in growing your BI and Big Data skills in 2015? Maybe your new year’s resolution is all about learning something new or taking your analytics knowledge to the next level?

See what BI and Big Data training courses were your peers’ favorites in 2014:

And last but not least, check out the brand new Big Data with the Microsoft Analytics Platform System course.

Learn more about Microsoft's big data solutions or find training opportunities on the Microsoft Virtual Academy.

Monday, December 22, 2014 10:00:00 AM

You can’t read the tech press without seeing news of exciting advancements or opportunities in data science and advanced analytics. We sat down with two of our own Microsoft Data Scientists to learn more about their role in the field, some of the real-world successes they’ve seen, and get their perspective on today’s opportunities in these evolving areas of data analytics.

If you want to learn more about predictive analytics in the cloud or hear more from Val and Wee Hyong, check out their new book, Predictive Analytics with Microsoft Azure Machine Learning: Build and Deploy Actionable Solutions in Minutes.

First, tell us about your roles at Microsoft?

[Val] Principal Data Scientist in the Data and Decision Sciences Group (DDSG) at Microsoft

[Wee Hyong] Senior Program Manager, Azure Data Factory team at Microsoft

And how did you get here? What’s your background in data science?

[Val] I started in data science over 20 years ago when I did a PhD in Artificial Intelligence. I used Artificial Neural Networks to solve challenging engineering problems, such as the measurement of fluid velocities and heat transfer. After my PhD, I applied data mining in environmental science and the credit industry: I did a year’s post-doctoral fellowship before joining Equifax as a New Technology Consultant in their London office. There, I pioneered the application of data mining to risk assessment and marketing in the consumer credit industry. I hand-coded over ten machine learning algorithms, including neural networks, genetic algorithms, and Bayesian belief networks, in C++ and applied them to fraud detection, predicting risk of default, and customer segmentation.

[Wee Hyong] I’ve worked on database systems for over 10 years, from academia to industry. I joined Microsoft after I completed my PhD in data streaming systems. When I started, I worked on shaping the SSIS server from concept to release in SQL Server 2012. I was passionate about data science even before joining Microsoft: prior to joining, I wrote code integrating association rule mining into a relational database management system, allowing users to combine association rule mining queries with SQL queries. I was also a SQL Server Most Valuable Professional (MVP), running data mining boot camps for IT professionals in Southeast Asia and showing how to transform raw data into insights using the data mining capabilities in Analysis Services.

What are the common challenges you see with people, companies, or other organizations who are building out their data science skills and practices?

[Val] The first challenge is finding the right talent. Many of the executives we talk to are keen to form their own data science teams but may not know where to start. First, they are not clear what skills to hire – should they hire PhDs in math, statistics, computer science or other? Should the data scientist also have strong programming skills? If so, in what programming languages? What domain knowledge is required? We have learned that data science is a team sport, because it spans so many disciplines including math, statistics, computer science, etc. Hence it is hard to find all the requisite skills in a single person. So you need to hire people with complementary skills across these disciplines to build a complete team.

The next challenge arises once there is a data science team in place – what’s the best way to organize this team? Should the team be centralized or decentralized? Where should it sit relative to the BI team? Should data scientists be part of the BI team or separate? In our experience at Microsoft, we recommend having a hybrid model with a centralized team of data scientists, plus additional data scientists embedded in the business units. Through the embedded data scientists, the team can build good domain knowledge in specific lines of business. In addition, the central team allows them to share knowledge and best practices easily. Our experience also shows that it is better to have the data science team separate from the BI team. The BI team can focus on descriptive and diagnostic analysis, while the data science team focuses on predictive and prescriptive analysis. Together they will span the full continuum of analytics.

The last major challenge I often hear about is the actual practice of deploying models in production. Once a model is built, it takes time and effort to deploy it in production. Today many organizations rewrite the models to run on their production environments. We’ve found success using Azure Machine Learning, as it simplifies this process significantly and allows you to deploy models to run as web services that can be invoked from any device.

[Wee Hyong] I also hear about challenges in identifying tools and resources to help build these data science skills. There are a significant number of online and printed resources covering a wide spectrum of data science topics – from the theoretical foundations of machine learning to its practical applications. One of the challenges is navigating this sea of resources and selecting the right ones to help them begin.

Another challenge I have often seen is identifying the right set of tools for modeling a predictive analytics scenario. Once people have figured out the right tools, it is equally important that they can easily operationalize the predictive analytics solutions they have built, to create new value for their organization.

What is your favorite data science success story?

[Val] My two favorite projects are the predictive analytics projects for ThyssenKrupp and Pier 1 Imports. I’ll speak today about the Pier 1 project. Last spring my team worked with Pier 1 Imports and their partner, MAX451, to improve cross-selling and upselling with predictive analytics. We built models that predict the next logical product category once a customer makes a purchase. Based on Azure Machine Learning, this solution will lead to a much better experience for Pier 1 customers.

[Wee Hyong] One of my favorite data science success stories is how OSIsoft collaborated with the Carnegie Mellon University (CMU) Center for Building Performance and Diagnostics to build an end-to-end solution that addresses several predictive analytics scenarios. With predictive analytics, they were able to solve many of their business challenges, ranging from predicting energy consumption in different buildings to fault detection. The team was able to effectively operationalize the machine learning models built using Azure Machine Learning, which led to better energy utilization in the buildings at CMU.

What advice would you give to developers looking to grow their data science skills?

[Val] I would highly recommend learning multiple subjects: statistics, machine learning, and data visualization. Statistics is a critical skill for data scientists that offers a good grounding in correct data analysis and interpretation. With good statistical skills we learn best practices that help us avoid pitfalls and wrong interpretation of data. This is critical because it is too easy to unwittingly draw the wrong conclusions from data. Statistics provides the tools to avoid this. Machine learning is a critical data science skill that offers great techniques and algorithms for data pre-processing and modeling. And last, data visualization is a very important way to share the results of analysis. A good picture is worth a thousand words – the right chart can help translate the results of complex modeling into your stakeholders’ language. So it is an important skill for a budding data scientist.

[Wee Hyong] Be obsessed with data, and acquire a good understanding of the problems that can be solved by the different algorithms in the data science toolbox. It is a good exercise to jumpstart by modeling a business problem in your organization where predictive analytics can help create value. You might not get it right on the first try, but that’s OK. Keep iterating and figuring out how you can improve the quality of the model. Over time, you will see that these early experiences build up your data science skills.

Besides your own book, what else are you reading to help sharpen your data science skills?

[Val] I am reading the following books:

  • Data Mining and Business Analytics with R by Johannes Ledolter
  • Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems) by Ian H. Witten, Eibe Frank, and Mark A. Hall
  • Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie or Die by Eric Siegel

[Wee Hyong] I am reading the following books:

  • Super Crunchers: Why Thinking-By-Numbers Is the New Way to Be Smart by Ian Ayres
  • Competing on Analytics: The New Science of Winning by Thomas H. Davenport and Jeanne G. Harris.

Any closing thoughts?

[Val]  One of the things we share in the book is that, despite the current hype, data science is not new. In fact, the term data science has been around since 1960. That said, I believe we have many lessons and best practices to learn from other quantitative analytics professions, such as actuarial science. These include the value of peer reviews, the role of domain knowledge, etc. More on this later.

[Wee Hyong] One of the reasons that motivated us to write the book is we wanted to contribute back to the data science community, and have a good, concise data science resource that can help fellow data scientists get started with Azure Machine Learning. We hope you find it helpful. 

Wednesday, December 17, 2014 10:00:00 AM

When you put big data to work, results can be beautiful. Especially when those results are as impactful as saving lives. Here are four best practice examples of how big data is being used in healthcare to improve, and often save, lives.

Aerocrine improves asthma care with near-real-time data

Millions of asthma sufferers worldwide depend on Aerocrine monitoring devices to diagnose and treat their disease effectively. But those devices are sensitive to small changes in ambient environment. That’s why Aerocrine is using a cloud analytics solution to boost reliability. Read more.

Virginia Tech advances DNA sequencing with cloud big data solution

DNA sequencing analysis is a form of life sciences research that has the potential to lead to a wide range of medical and pharmaceutical breakthroughs. However, this type of analysis requires supercomputing resources and Big Data storage that many researchers lack. Working through a grant provided by the National Science Foundation in partnership with Microsoft, a team of computer scientists at Virginia Tech addressed this challenge by developing an on-demand, cloud-computing model using the Windows Azure HDInsight Service. By moving to an on-demand cloud computing model, researchers will now have easier, more cost-effective access to DNA sequencing tools and resources, which could lead to even faster, more exciting advancements in medical research. Read more.

The Grameen Foundation expands global humanitarian efforts with cloud BI

Global nonprofit Grameen Foundation is dedicated to helping as many impoverished people as possible, which means continually improving the way Grameen works. To do so, it needed an ongoing sense of its programs’ performance. Grameen and Microsoft brought people and technology together to create a BI solution that helps program managers and financial staff: glean insights in minutes, not hours; expand services to more people; and make the best use of the foundation’s funding. Read more.

Ascribe transforms healthcare with faster access to information

Ascribe, a leading provider of IT solutions for the healthcare industry, wanted to help clinicians identify trends and improve services by supplying faster access to information. However, exploding volumes of structured and unstructured data hindered insight. To solve the problem, Ascribe designed a hybrid-cloud solution with built-in business intelligence (BI) tools based on Microsoft SQL Server 2012 and Windows Azure. Now, clinicians can respond faster with self-service BI tools. Read more.

Learn more about Microsoft’s big data solutions

Tuesday, December 16, 2014 10:00:00 AM

This blog post was authored by: Matt Usher, Senior PM on the Microsoft Analytics Platform System (APS) team

Microsoft is happy to announce the release of Analytics Platform System (APS) Appliance Update (AU) 3. APS is Microsoft’s big-data-in-a-box appliance for serving the needs of relational data warehouses at massive scale. With this release, the APS appliance supports new scenarios for utilizing Power BI modeling, visualization, and collaboration tools over on-premises data sets. In addition, this release extends PolyBase to allow customers to utilize the HDFS infrastructure in Hadoop for ORC files and directory modeling, to more easily integrate non-relational data into their data insights.

The AU3 release includes:

  • PolyBase recursive directory traversal and ORC file format support
  • Integrated Data Management Gateway, enabling queries from Power BI to on-premises APS
  • TSQL compatibility improvements to reduce migration friction from SQL Server SMP
  • Replatforming to Windows Server 2012 R2 and SQL Server 2014

PolyBase Directory Traversal and ORC File Support

PolyBase is an integrated technology that allows customers to utilize the skill set they have developed in TSQL for querying and managing data in Hadoop platforms. With the AU3 release, the APS team has augmented this technology with the ability to define an external table that targets a directory structure as a whole. This new ability unlocks a whole new set of scenarios for customers to utilize their existing investments in Hadoop as well as APS to gain greater insight into all of the data collected within their data systems. In addition, AU3 introduces full support for the Optimized Row Columnar (ORC) file format – a common storage format for files within Hadoop.
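Before an external table can reference ORC data, the file format itself is declared once per database. A minimal sketch, assuming for illustration that the logs are stored as ORC (the compression codec shown is one of the supported options):

CREATE EXTERNAL FILE FORMAT [LogFileFormat]
WITH
(
	FORMAT_TYPE = ORC,
	DATA_COMPRESSION = 'org.apache.hadoop.io.compress.SnappyCodec'
);

An external table definition can then reference this format by name through its FILE_FORMAT option.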

As an example of this new capability, let’s examine a customer that uses APS to host inventory and Point of Sale (POS) data in an APS appliance while storing the web logs from their ecommerce site in Hadoop. With AU3, the customer can maintain their logs in an easy-to-construct directory structure, such as year/month/date/server/log, for simple storage and retrieval within Hadoop, and then expose it as a single table to analysts and data scientists for insights.

In this example, let’s assume that each of the Serverxx folders contains the log file for that server on that particular day. In order to surface the entire structure, we can construct an external table using the following definition:

CREATE EXTERNAL TABLE [dbo].[WebLogs]
(
	[Date] DATETIME NULL,
	[Uri] NVARCHAR(256) NULL,
	[Server] NVARCHAR(256) NULL,
	[Referrer] NVARCHAR(256) NULL
)
WITH
(
	LOCATION='//Logs/',
	DATA_SOURCE = Azure_DS,
	FILE_FORMAT = LogFileFormat,
	REJECT_TYPE = VALUE,
	REJECT_VALUE = 100
);
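The external table above references a data source (Azure_DS) and a file format (LogFileFormat) that would be defined separately. A minimal sketch of those supporting objects, assuming a Hadoop cluster endpoint and tab-delimited log files – the host name, port, and delimiter here are illustrative, not from the original post:

```sql
-- Hypothetical Hadoop data source referenced by the external table
CREATE EXTERNAL DATA SOURCE Azure_DS
WITH
(
	TYPE = HADOOP,
	LOCATION = 'hdfs://hadoop-head-node:8020'
);

-- Hypothetical delimited-text file format for the web logs
CREATE EXTERNAL FILE FORMAT LogFileFormat
WITH
(
	FORMAT_TYPE = DELIMITEDTEXT,
	FORMAT_OPTIONS (FIELD_TERMINATOR = '\t')
);
```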

By setting LOCATION to the //Logs/ folder, the external table will pull data from all folders and files within the directory structure. In this case, a simple select will return the five earliest entries by date, regardless of which log file contains the data:

SELECT TOP 5
	*
FROM
	[dbo].[WebLogs]
ORDER BY
	[Date]


Note: PolyBase, like Hadoop, will not return results from hidden folders or from any file that begins with an underscore (_) or a period (.).

Integrated Data Management Gateway

With the integration of the Microsoft Data Management Gateway into APS, customers now have a scale-out compute gateway that allows Azure cloud services to query sophisticated sets of on-premises data more effectively. Power BI users can leverage PolyBase in APS to perform more complicated mash-ups of results from on-premises unstructured data sets in Hadoop distributions. By exposing the data from the APS appliance as an OData feed, Power BI can easily and quickly consume the data for display to end users.

For more details, please look for an upcoming blog post on the Integrated Data Management Gateway.

TSQL Compatibility improvements

The AU3 release incorporates a set of TSQL improvements targeted at richer language support, improving the types of queries and procedures that can be written for APS. For AU3, the primary focus was on implementing full error handling within TSQL, allowing customers to port existing applications to APS with minimal code change and bringing full error handling to existing APS customers. The release includes the TRY...CATCH construct, the THROW statement, and the related error functions for handling errors.

In addition to the error handling components, the AU3 release also includes support for the XACT_STATE scalar function, which indicates the current transaction state of a user request.
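A minimal sketch of how these constructs fit together – the table name and update are illustrative, not from the original post:

```sql
-- Illustrative error handling with TRY...CATCH, THROW, and XACT_STATE
BEGIN TRY
	BEGIN TRANSACTION;
	UPDATE dbo.Inventory SET Quantity = Quantity - 1 WHERE ItemId = 42;
	COMMIT TRANSACTION;
END TRY
BEGIN CATCH
	-- XACT_STATE() <> 0 means a transaction is still open (-1 = uncommittable)
	IF XACT_STATE() <> 0
		ROLLBACK TRANSACTION;
	-- Re-raise the original error to the caller
	THROW;
END CATCH;
```

A bare THROW inside a CATCH block re-raises the original error with its number, message, and state intact, which is what makes porting existing SQL Server error-handling code straightforward.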

Replatformed to Windows Server 2012 R2 and SQL Server 2014

The AU3 release also marks the upgrade of the core fabric of the APS appliance to Windows Server 2012 R2 and SQL Server 2014. With the upgrade to the latest versions of Microsoft’s flagship server operating system and core relational database engine, the APS appliance takes advantage of the improved networking, storage, and query execution components of these products. For example, the APS appliance now utilizes a virtualized Active Directory infrastructure, which reduces cost and increases domain reliability within the appliance, helping to make APS the price/performance leader in the big data appliance space.

APS on the Web

To learn more about the Microsoft Analytics Platform System, please visit us on the web at http://www.microsoft.com/aps

Tuesday, December 16, 2014 9:30:00 AM

As the end of 2014 nears, now is the perfect time to review IT infrastructure plans for the coming year.  If you haven’t made supportability a key initiative for 2015, there are some important end-of-support dates for SQL Server 2005 and Windows Server 2003 that you should know about.

After the end of extended support, security updates will no longer be available for these products.  Staying ahead of these support dates will help you achieve regulatory compliance and mitigate potential future security risks. That means SQL Server 2005 users, especially those running databases on Windows Server 2003, should make upgrading the data platform an IT priority.

Security isn’t the only reason to think about upgrading. Here are six benefits to upgrading and migrating your SQL Server 2005 databases before the end of extended support:

  1. Maintain compliance – It will become harder to prove compliance with the latest regulations such as the upcoming PCI DSS 3.0. Protect your data and stay on top of regulatory compliance and internal security audits by running an upgraded version of SQL Server.
  2. Achieve breakthrough performance – Per industry benchmarks, SQL Server 2014 delivers 13x performance gains relative to SQL Server 2005 and 5.5x performance gains over SQL Server 2008.  Customers using SQL Server 2014 can further accelerate mission critical applications with up to 30x transaction performance gains with our new in-memory OLTP engine and accelerate queries up to 100x with our in-memory columnstore. 
  3. Virtualize and consolidate with Windows Server – Scale up on-premises or scale-out via private cloud with Windows Server 2012 R2. Reduce costs by consolidating more database workloads on fewer servers, and increase agility using the same virtualization platform on-premises and in the cloud.
  4. Reduce TCO and increase availability with Microsoft Azure – Azure Virtual Machines can help you reduce the total cost of ownership of deployment, management, and maintenance of your enterprise database applications. And it’s easier than ever to upgrade your applications and achieve high availability in the cloud using pre-configured templates in Azure.
  5. Use our easy on-ramp to cloud for web applications – The new preview of Microsoft Azure SQL Database announced last week has enhanced compatibility with SQL Server that makes it easier than ever to migrate from SQL Server 2005 to Microsoft Azure SQL Database. Microsoft’s enterprise-strength cloud brings global scale and near zero maintenance to database-as-a-service, and enables you to scale out your application on demand.
  6. Get more from your data platform investments – Upgrading and migrating your databases doesn’t have to be painful or expensive. A Forrester Total Economic Impact™ of Microsoft SQL Server study found a payback period of just 9.5 months for moving to SQL Server 2012 or 2014.
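To illustrate the in-memory features behind the performance claims above, here is a minimal sketch – with hypothetical table names – of a SQL Server 2014 clustered columnstore index and a memory-optimized table (the latter requires a database with a memory-optimized filegroup):

```sql
-- Hypothetical analytic table accelerated with a clustered columnstore index
CREATE TABLE dbo.SalesFact
(
	SaleId BIGINT NOT NULL,
	SaleDate DATETIME NOT NULL,
	Amount MONEY NOT NULL
);
CREATE CLUSTERED COLUMNSTORE INDEX cci_SalesFact ON dbo.SalesFact;

-- Hypothetical memory-optimized table for the in-memory OLTP engine
CREATE TABLE dbo.SessionState
(
	SessionId INT NOT NULL
		PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
	LastTouch DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

The columnstore index stores the fact table column-wise for fast scans, while the memory-optimized table keeps rows in memory with a lock-free hash index, which is where the transaction-rate gains come from.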


Monday, December 15, 2014 10:00:00 AM

As part of SQL Server’s ongoing interoperability program, we are pleased to announce the general availability of two SQL Server drivers: the Microsoft JDBC Driver for SQL Server and the SQL Server Driver for PHP!

Both drivers provide robust data access to Microsoft SQL Server and Microsoft Azure SQL Database. The JDBC Driver for SQL Server is a Java Database Connectivity (JDBC) type 4 driver supporting Java Development Kit (JDK) version 1.7. The PHP driver allows developers who use the PHP scripting language, version 5.5, to access Microsoft SQL Server and Microsoft Azure SQL Database and to take advantage of new features implemented in ODBC.

You can download the JDBC driver here, and download the PHP driver here. We invite you to explore the latest the Microsoft Data Platform has to offer via a trial evaluation of Microsoft SQL Server 2014, or by trying the new preview of Microsoft Azure SQL Database.

Site Map | Printable View | © 2008 - 2015 SQLTrainer.com, LLC | Powered by mojoPortal | HTML 5 | CSS | Design by mitchinson