New features of SQL Server 2014


At the last TechEd North America (May 2013), Microsoft announced what the upcoming SQL Server 2014 would look like, and since then more information and preview builds have kept arriving so that we could all prepare little by little.

The final version is here! It was released on April 1st and will become generally available on the 15th of the same month.

With SQL Server 2014, Microsoft is focusing on performance, scalability, cloud integration and Big Data management features.

Although these are the main areas of improvement, the new version actually includes hundreds of small enhancements, performance tweaks and fixes for minor bugs. MSDN has a comprehensive list of new features, grouped by area, but here I will highlight the most important ones.

First we have “Hekaton”, the impressive in-memory OLTP engine developed together with Microsoft Research. It is a new engine for the database manager that uses specially optimized tables residing in memory, free of the typical constraints of disk-based data management, which makes for spectacular performance gains. Microsoft speaks of applications up to 30 times faster when they are designed to take advantage of Hekaton, and an average 10x speedup in other applications (there is a specific PDF on the subject). These in-memory tables can be marked as durable (persisted to disk) or as schema-only, in which case only their definition is saved; the latter are ideal for heavy temporary tasks such as transformations, data loads, temporary tables, caches, etc. Using SSD disks to extend the memory available to the engine is also supported.
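The two durability modes described above map directly onto the `DURABILITY` option of `CREATE TABLE`. A minimal sketch, with hypothetical table names, assuming the database already has a `MEMORY_OPTIMIZED_DATA` filegroup:

```sql
-- Durable in-memory table: both schema and rows survive a restart.
CREATE TABLE dbo.ShoppingCart (
    CartId      INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Schema-only table: only the definition persists; rows are lost on
-- restart, which suits staging, caching and other temporary workloads.
CREATE TABLE dbo.SessionCache (
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   NVARCHAR(4000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

Note that memory-optimized tables use hash indexes with a fixed `BUCKET_COUNT` rather than the usual B-tree indexes, so sizing the bucket count to the expected number of rows is part of the design.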


Secondly, the acclaimed “AlwaysOn” high-availability feature introduced in SQL Server 2012 has evolved and now adds new capabilities, such as support for up to 8 secondary replicas (up from 4) that remain readable even in the event of network failures, and the option of using shared storage (Cluster Shared Volumes) to improve resilience to failures. There are also improvements in the time needed for certain heavy maintenance operations (such as rebuilding partitioned indexes), which will keep databases available for longer.
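Adding one of those readable secondary replicas is done with `ALTER AVAILABILITY GROUP`. A sketch with hypothetical group and server names, assuming the availability group `AGSales` and the underlying cluster and endpoints are already configured:

```sql
-- Add a new asynchronous secondary replica that accepts read-only connections.
ALTER AVAILABILITY GROUP AGSales
ADD REPLICA ON N'SQLNODE5'
WITH (
    ENDPOINT_URL      = N'TCP://SQLNODE5.contoso.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)  -- readable secondary
);
```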

Other features that we can highlight are:

  • A much improved query optimizer, the database engine component in charge of creating and optimizing query plans.
  • Easier hybrid cloud environments: you can keep the transaction log and data files in a Windows Azure Storage account while transactions are processed on your on-premises servers. The data can also be encrypted in Azure, with the encryption keys stored on your local machine for greater security and privacy.
  • New security permissions for users and roles: connecting to current and future databases (CONNECT ANY DATABASE), impersonating any other login (IMPERSONATE ANY LOGIN), altering any database (ALTER ANY DATABASE), and performing SELECTs on any database without write access (SELECT ALL USER SECURABLES).
  • Greater control over resource isolation, including the ability to set the minimum and maximum number of input/output operations per second (IOPS) for each storage volume used.
  • Delayed durability transactions. To reduce latency, a transaction can be declared with delayed durability, so it returns control to the client before the corresponding record is hardened to the transaction log on disk.
  • Improvements in the free backup-to-Azure tool, which now supports SQL Server 2005 onwards. Backups can also be written to and restored from a URL directly.
  • Backup encryption, both for locally made copies and for those sent to Azure.
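The new server-level permissions mentioned above are granted like any other. A sketch with a hypothetical login name:

```sql
-- These server-level permissions are new in SQL Server 2014.
GRANT CONNECT ANY DATABASE     TO AuditLogin;  -- connect to current and future databases
GRANT IMPERSONATE ANY LOGIN    TO AuditLogin;  -- impersonate any other login
GRANT SELECT ALL USER SECURABLES TO AuditLogin;  -- read data everywhere, without write access
```

Combining CONNECT ANY DATABASE with SELECT ALL USER SECURABLES is handy for auditing scenarios: the login can read everything but modify nothing.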
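The per-volume IOPS limits are configured through Resource Governor pools. A minimal sketch with a hypothetical pool name:

```sql
-- MIN_IOPS_PER_VOLUME / MAX_IOPS_PER_VOLUME are new in SQL Server 2014.
CREATE RESOURCE POOL ReportingPool
WITH (
    MIN_IOPS_PER_VOLUME = 20,
    MAX_IOPS_PER_VOLUME = 100
);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```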
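Delayed durability is first enabled at the database level and then requested per transaction at commit time. A sketch with hypothetical database and table names:

```sql
-- Allow delayed-durability commits in this database.
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

-- Opt in for a specific transaction: control returns to the client
-- before the log record is flushed to disk.
BEGIN TRANSACTION;
    UPDATE dbo.Counters SET Hits = Hits + 1 WHERE CounterId = 1;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

The trade-off is explicit: a crash can lose the most recent delayed-durability commits, so this suits high-volume, low-value writes such as counters or logging.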
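Backup to URL and backup encryption can be combined in a single statement. A sketch with hypothetical names throughout, assuming a credential for the Azure storage account and a server certificate for encryption already exist:

```sql
BACKUP DATABASE SalesDB
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH
    CREDENTIAL = N'AzureBackupCredential',  -- holds the storage account key
    ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert),
    COMPRESSION;
```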

You can find detailed information, with datasheets, whitepapers and technical presentations in the Microsoft SQL Server 2014 CTP2 Product Guide.

