SQL Server Enterprise 2014 on cloud

1-click AWS Deployment 1-click Azure Deployment 1-click Google Deployment

Overview

SQL Server 2014 brings significant performance gains to accelerate your business and enable new, transformational scenarios that make you more competitive. Microsoft offers complete in-memory technologies for OLTP, data warehousing, and analytics built directly into SQL Server, so there is no need to buy specialized hardware or expensive add-ons, or to learn new skills, to speed up your transactions, queries, and insights. SQL Server 2014 also provides new hybrid solutions for cloud backup and disaster recovery, and takes advantage of new capabilities in Windows Server 2012 R2 to provide enterprise-class availability and scalability with predictable performance and reduced infrastructure costs. SQL Server 2014 also continues to offer industry-leading business intelligence capabilities and integration with familiar tools like Excel for faster insights on your data.

SQL Server 2014 is the next generation of Microsoft’s information platform, with new features that deliver faster performance, expand capabilities in the cloud, and provide powerful business insights. In this book, we explain how SQL Server 2014 incorporates in-memory technology to boost performance in online transactional processing (OLTP) and data-warehouse solutions. We also describe how it eases the transition from on-premises solutions to the cloud with added support for hybrid environments. SQL Server 2014 continues to include components that support analysis, although no major new features for business intelligence were included in this release. However, several advances of note have been made in related technologies such as Microsoft Excel 2013, Power BI for Office 365, HDInsight, and PolyBase.

SQL Server 2014 is very much the familiar SQL Server. It uses the same management tools, the same T-SQL language, and the same APIs that connect it to your applications. That means you should be able to upgrade existing databases in place to take advantage of its performance and scaling advances. Microsoft has also been looking at the ways we use data in modern applications, and has added new features that should dramatically improve performance and that bring on-premises databases and the cloud closer together.

SQL Server 2014 connection with Azure:

The Azure cloud platform combines its own SQL Azure database service with SQL Server running on virtual machines as part of its IaaS (Infrastructure-as-a-Service) offering. Although SQL Server 2014 is an application rather than a service, it has been designed to take advantage of the cloud, using Azure’s storage and IaaS capabilities to give businesses of all sizes access to cloud-hosted disaster recovery.

Large databases can mean expensive, and often slow, backups. Using Azure as a subscription-based backup target, there’s no need for CAPEX, and you can keep your existing backup techniques, just with Azure as the destination. It’s arguably more secure than a traditional backup: Azure holds three copies of your data, so it’s always available. Getting started can take time, so Azure offers the option of letting you make your initial backup on a local disk that is then mailed to Microsoft and stored in Azure, ready for the rest of your backups over the wire. Backups can be encrypted, and there’s even support for older versions of SQL Server. Managed backup tools automate the process: all you need to do is define the Azure account you’re using and a retention period. SQL Server then takes a transaction log backup roughly every 5 MB of log growth, and a full backup when the log has grown by about 1 GB or a maximum time interval has elapsed. If you accidentally delete a log backup, the system will detect that you no longer have a consistent backup chain and will take a full backup.

Azure and SQL Server can also be used as a disaster recovery (DR) solution, with an Azure IaaS SQL Server designated as an always-on replica. As soon as an on-premises server fails, you’re switched to a cloud-hosted SQL Server instance preloaded with the last backup. It’s not the cheapest approach, but it does mean you don’t need to invest in running your own DR site. You can use any Azure region, and all you need to pay for is the IaaS VM and the storage you use. The backup tools validate the environment and handle failures.
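To make the managed backup piece concrete, here is a minimal T-SQL sketch of how SQL Server 2014 managed backup to Azure is typically enabled. The storage account, access key, credential, database name and retention period are all placeholders, not values from this article:

    -- Store the Azure storage account name and access key as a credential
    CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',    -- placeholder storage account name
         SECRET = '<storage access key>';  -- placeholder access key

    -- Enable managed backup for one database with a 30-day retention period
    EXEC msdb.smart_admin.sp_set_db_backup
        @database_name = 'SalesDB',        -- placeholder database
        @enable_backup = 1,
        @retention_days = 30,
        @credential_name = 'AzureBackupCredential',
        @encryption_algorithm = 'NO_ENCRYPTION';

Once enabled, the scheduling described above is automatic; there is no backup job to define.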

One cheaper option is to use SQL Server’s Azure cloud backup as the basis of a cold-start DR service. By hosting a suspended SQL Server instance on Azure IaaS (which only costs you when the server is running), you can use your cloud backup data to update the databases associated with your cloud DR server, bringing you back online after a failure. It’s not as fast as failing over to an always-running DR server, but it’s an economical approach that will work well for smaller businesses.

With hybrid cloud scenarios in mind, there’s also tooling that will migrate a SQL Server database from an on-premises server to a virtual machine running on Azure. It’s not just for SQL Server 2014, either, as the wizard will migrate SQL Server 2008, 2008 R2 and 2012, with support for VMs running SQL Server 2012 and 2014. It’s an approach that makes it easier to handle database migrations, or to use Azure as a development platform for new applications — or, of course, to move from on-premises to cloud.

Deploying SQL Server 2014 in Azure is simplified by Microsoft providing VM images with SQL Server already installed. We just have to pick the image we want, deploy it, and we are done. Once it’s instantiated, you can open SQL Server 2014’s Management Studio and use the Deploy Database to Windows Azure VM option to launch the wizard. Connect to the remote server, sign in to Azure, publish to a database in your VM, and (once the data has uploaded) away you go.

SQL Server 2014 provides a new in-memory capability for tables that can fit entirely in memory. While small tables may be entirely resident in memory in all versions of SQL Server, they may also reside on disk, so work is involved in reserving RAM, writing evicted pages to disk, loading new pages from disk, locking the pages in RAM while they are being operated on, and many other tasks. By treating a table as guaranteed to be entirely resident in memory, much of the ‘plumbing’ of disk-based databases can be avoided. For disk-based SQL Server applications, SQL Server 2014 also provides the SSD Buffer Pool Extension, which can improve performance by adding a cache layer between RAM and spinning media.

SQL Server 2014 also enhances the AlwaysOn (HADR) solution by increasing the readable secondaries count and sustaining read operations when secondaries are disconnected from the primary, and it provides new hybrid disaster recovery and backup solutions with Microsoft Azure, enabling customers to use existing skills with the on-premises version of SQL Server to take advantage of Microsoft’s global datacenters. In addition, it takes advantage of new Windows Server 2012 and Windows Server 2012 R2 capabilities for database application scalability in a physical or virtual environment. Microsoft delivers three versions of SQL Server 2014 for downloading: the one that runs on Microsoft Azure, the SQL Server 2014 CAB, and the SQL Server 2014 ISO. SQL Server 2014 is the last version available for the x86/IA32 architecture.
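The Buffer Pool Extension mentioned above is enabled with a single statement. A minimal sketch, assuming a hypothetical SSD volume F: and an illustrative size:

    -- Place the buffer pool extension file on a fast SSD volume
    -- (file path and size are placeholders)
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'F:\SSDCACHE\Example.BPE', SIZE = 32 GB);

    -- Verify that the extension is active
    SELECT path, current_size_in_kb, state_description
    FROM sys.dm_os_buffer_pool_extension_configuration;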

SQL Server 2014 In-Memory OLTP Architecture and Data Storage

  1. Built-in to SQL Server for a hybrid and integrated experience – The idea behind this principle was to integrate the In-Memory OLTP engine into the main SQL Server engine. Users get a hybrid architecture combining the best of both worlds, allowing them to create both traditional disk based tables and the newly introduced memory optimized tables in the same database (a minimal DDL sketch follows this list). This integrated approach gives users the same manageability, administration and development experience. Further, it allows for writing integrated queries combining both disk based tables and memory optimized tables. Finally, the integration gives users integrated high availability (i.e. AlwaysOn) and backup/restore options.
  2. Optimize for main memory data access – As memory prices steadily decline, the second principle was to leverage large amounts of memory as much as possible. By storing the data and index structures of memory optimized tables in memory only, these tables don’t use the buffer pool or B-tree structures for indexes. This new architecture uses stream based storage for data persistence on disk.
  3. Accelerate business logic processing – The idea behind this principle is to compile T-SQL to machine/native code via a C code generator and use aggressive optimization at compile time with Microsoft’s Visual C/C++ compiler. Once compiled, invoking a procedure is just a DLL entry point, and much of the performance comes from executing compiled stored procedures. Remember, the cost of interpreted T-SQL stored procedures and their query plans is quite significant. For an OLTP workload the primary goal is to support efficient execution of compile-once-and-execute-many-times workloads as opposed to optimizing the execution of ad hoc queries.
  4. Provide frictionless scale-up – The idea behind this principle was to provide high concurrency to support a multi-user environment without blocking (remember blocking implies context switches, which are very expensive) and to better utilize multi-core processors. Memory optimized tables use multi-version optimistic concurrency control with full ACID support, and they do not require a lock manager because they use a lock/latch/spinlock free algorithm. This principle is based on optimistic concurrency control, where the engine is optimized around transactions that don’t conflict with each other. Multi-versioning eliminates many potential conflicts between transactions, and the rest (very few of them) are handled by rolling back one of the conflicting transactions rather than blocking.
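Here is the sketch referenced in principle 1: a minimal, illustrative example of creating a durable memory optimized table alongside disk based tables in the same database. The database, filegroup, file path, table name and bucket count are all placeholders:

    -- A database needs a memory optimized filegroup before it can hold such tables
    ALTER DATABASE SalesDB
        ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB
        ADD FILE (NAME = 'SalesDB_mod1', FILENAME = 'C:\Data\SalesDB_mod1')
        TO FILEGROUP SalesDB_mod;

    -- A durable memory optimized table with a hash index on the primary key
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);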

If you look at the architecture, you will notice there are certain areas which remain the same whether you access memory optimized tables or disk based tables. For example, the client connects to the TDS handler and Session Management module regardless of whether it wants to access a memory optimized table or a disk based table. The same is true when calling a natively compiled stored procedure or an interpreted T-SQL stored procedure. Then there are certain changes in how memory optimized tables are created, stored, managed and accessed. Further, Query Interop allows interpreted T-SQL queries and stored procedures to access memory optimized tables.

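For example, an ordinary interpreted T-SQL query can join a memory optimized table with a disk based one through the interop layer. A small sketch reusing the placeholder table from the earlier example, with dbo.Users standing in for any disk based table:

    -- Interop: interpreted T-SQL joining memory optimized and disk based tables
    SELECT c.CartId, c.CreatedDate, u.UserName
    FROM dbo.ShoppingCart AS c                    -- memory optimized
    JOIN dbo.Users AS u ON u.UserId = c.UserId;   -- disk based (placeholder)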

The In-Memory native compiler takes an abstract tree representation of a T-SQL stored procedure defined as a native stored procedure. This representation includes the queries within the stored procedure as well as the table and index metadata, and the compiler turns the procedure into native/machine code. This code is designed to execute against tables and indexes managed by the storage engine for memory optimized tables.
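A hedged sketch of what such a natively compiled procedure looks like in SQL Server 2014, again using the placeholder table from earlier (SCHEMABINDING, EXECUTE AS and an ATOMIC block are required by the native compilation syntax):

    CREATE PROCEDURE dbo.usp_AddCartItem
        @CartId INT, @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Runs entirely as compiled machine code against the in-memory engine
        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSDATETIME());
    END;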

The In-Memory storage engine manages the user data and indexes of memory optimized tables. It provides transactional operations on tables of records, hash and range indexes on the tables, and base mechanisms for storage, checkpointing, recovery and high availability. For a durable table (more on durability in the next part of this tip series), it writes data to a filestream based filegroup on disk so that it can recover in case the server crashes. It also logs its updates to the SQL Server database transaction log along with the log records for disk based tables. It uses SQL Server filestream storage for the checkpoint files that are used to recover memory optimized tables automatically when a database is recovered or restored.

Other than these items, as I said In-memory OLTP is fully integrated into the existing SQL Server database engine. Hekaton leverages a number of services already available in SQL Server, for example:

  • Metadata about memory optimized tables is stored in the SQL Server catalogs
  • A transaction can access and update data in both memory optimized and disk based tables
  • If you have set up an AlwaysOn availability group, then memory optimized tables will fail over in the same manner as disk based tables

SQL Server 2014 Data Storage

Memory optimized tables have a completely new row format for storing row data in memory, and the structure of the row is optimized for memory residency and access. Unlike disk based tables, for which data gets stored in data or index pages, memory optimized tables do not have any page containers. As I said before, In-Memory OLTP uses multi-versioning for data changes, which means the payload does not get updated in place; rather, rows are versioned on each data change.


 

With Hekaton, every row has a begin timestamp and an end timestamp, which determine the row’s version, validity and visibility. The begin timestamp identifies the transaction that inserted the row, whereas the end timestamp identifies the transaction that deleted it. A value of infinity for the end timestamp indicates the row has not been deleted and is the latest version. Updating a row is a combination of deleting the existing record and inserting a new one. For a read operation, only record versions whose valid time overlaps the read time are visible to the read; all other versions are ignored. Different versions of a record always have non-overlapping validity intervals, so at most one version of a record is visible to a given read operation. In short, the begin and end timestamp values determine which other transactions will be able to see the row.
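As a worked illustration (the timestamps are made-up transaction IDs, and the row is the placeholder ShoppingCart row from earlier):

    (Begin=100, End=∞)    CartId=7, UserId=12   -- version inserted by transaction 100
    -- Transaction 200 updates the row: the old version gets End=200,
    -- and a new version is inserted with Begin=200
    (Begin=100, End=200)  CartId=7, UserId=12   -- old version, now historical
    (Begin=200, End=∞)    CartId=7, UserId=99   -- current version

A reader with read time 150 sees only the first version (100 <= 150 < 200), while a reader with read time 250 sees only the second; no read ever sees both.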

 

Keep in mind that memory optimized tables are stored entirely in memory, so you need to make sure you have enough memory available; one recommendation is 2 times the data size. You will also need to verify you have enough memory for the buffer pool for your disk based tables. If you don’t have enough memory, your transactions will start failing, and a recovery/restore operation will fail if there is not enough space to accommodate all the data of the memory optimized tables. Note that you don’t need physical memory to accommodate the entire database, which could be terabytes, but you do need to accommodate all your memory optimized tables, which should not be more than 256 GB in size.
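To keep an eye on this, SQL Server 2014 exposes per-table memory consumption through a DMV. A simple monitoring query, run in the database that holds the memory optimized tables:

    -- Memory consumed by each memory optimized table and its indexes
    SELECT OBJECT_NAME(object_id) AS table_name,
           memory_allocated_for_table_kb,
           memory_used_by_table_kb,
           memory_allocated_for_indexes_kb
    FROM sys.dm_db_xtp_table_memory_stats;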

Preparation

 

For local installations, you must run Setup as an administrator. If you install SQL Server from a remote share, you must use a domain account that has read and execute permissions on the remote share.

It is best to install the server from a DVD or from a mounted ISO file, which behaves as a CD/DVD. In this case, once you double-click the disk icon, the autorun.inf file is used by the AutoRun and AutoPlay components of Microsoft Windows, and the installation starts using the configuration in the autorun.inf file. If AutoRun is disabled on your server, you may double-click setup.exe to start the installation. To install from a network share, locate the proper folder on the share, and then double-click setup.exe.
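As an alternative to the wizard, Setup also supports unattended command-line installation. A minimal sketch (the feature list, instance name and administrator account are illustrative, not prescribed by this article):

    setup.exe /QS /ACTION=Install /FEATURES=SQLEngine,SSMS ^
        /INSTANCENAME=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS ^
        /SQLSYSADMINACCOUNTS="MYDOMAIN\SqlAdmin"

/QS shows progress but requires no interaction; the steps below describe the interactive path.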

Installation, Step By Step

1. Upon starting the installation, you will get this window:

 

2. Choose the Installation section in order to start the installation:

 

3. Enter the product key and click Next:

 

4. You have to accept the license terms in order to continue.

 

5. Mark the check box if you want to use Microsoft Update to check for updates after you finish the installation, and then click Next.

 

6. The SQL Server installation program checks your machine to make sure it meets the hardware and software requirements to install SQL Server. If you get any errors in the results, please use the link in the error message to get more information.

Do not move to the next step if you get any errors in this report!

 

7. Choose the first option in order to install SQL Server, and then select which features to install. We are going to choose our features manually in the next step. Click Next to continue:

 

8. In this step, you can choose which features to install. It is highly recommended NOT to choose all the features on a production server if you don’t need them; choose only the features that you need or might need. Conversely, you might want to select everything on a development server in order to give yourself more flexibility in your development environment. If you later decide to use another feature, you will be able to add it in production at that point.

 

9. SQL Server allows you to install multiple instances of the same version or different versions of the server. Make sure you choose a name that will help you recognize in the future which instance you are working with. Theoretically you can use any instance name that you want.

Remember the name of the instance!

You will need to use this name every time that you want to connect to the server.

 

10. Security is important for every product and every business. By following simple best practices, you can avoid many security vulnerabilities. SQL Server runs as a group of services in the background. In this step you can select the service accounts for those services; each service represents a process or a set of processes, and each can be configured to use its own service account to manage authentication of its operations with Windows.

 

11. SQL Server supports several collations. A collation encodes the rules governing the proper use of characters for either a language, such as Hebrew or Polish, or an alphabet, such as Latin1_General (the Latin alphabet used by western European languages). Typically, you should choose a collation that supports most of the languages commonly used by users at your organization. Select the collation and press Next.
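If you want to double-check afterwards, the server collation can be queried, and the available collations browsed, with two simple statements (the Hebrew filter is just an example):

    -- The collation chosen at install time
    SELECT SERVERPROPERTY('Collation');

    -- Browse available collations for a given language
    SELECT name, description
    FROM sys.fn_helpcollations()
    WHERE name LIKE 'Hebrew%';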

 

12. SQL Server can operate in one of two security (authentication) modes: (a) Windows Authentication mode, which allows a user to connect through an operating system user account, or (b) Mixed Mode, which allows users to connect to an instance of SQL Server using either Windows Authentication or SQL Server Authentication.
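The difference shows up later when you create logins. A small sketch (names and password are placeholders): the first statement only works when Mixed Mode is enabled, while the second works in either mode:

    -- SQL Server Authentication login (requires Mixed Mode)
    CREATE LOGIN app_user WITH PASSWORD = 'Str0ng!Passw0rd';

    -- Windows Authentication login mapped to an OS account
    CREATE LOGIN [MYDOMAIN\alice] FROM WINDOWS;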

 

13. By default, SQL Server places its data files on the operating system disk, so it is highly recommended to follow best practice for your specific system and usage. Separating the LOG files from the data files can improve performance significantly. The system database tempdb is used extensively by SQL Server, and it is rebuilt each time the server is restarted. (See this article for details about tempdb: Understanding Logging in Tempdb. Is Tempdb Recreated or Rebuilt after SQL Server Starts?) It is highly recommended to use a fast disk for this database. It is best practice to separate data, transaction logs, and tempdb in environments where you can guarantee that separation. There are important points to consider which this article does not cover at the moment. For small systems you can use the default configuration and change it later as needed.
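If you accept the defaults now, tempdb can still be moved to a faster disk later. A hedged sketch using the default logical file names and placeholder drive letters; the change takes effect after the service restarts:

    -- Relocate tempdb data and log files to dedicated fast disks
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDBLog\templog.ldf');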

 

14. FILESTREAM enables SQL Server-based applications to store unstructured data, such as documents and images, on the file system. FILESTREAM integrates the SQL Server Database Engine with an NTFS file system by storing varbinary(max) binary large object (BLOB) data as files on the file system.
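If you skip FILESTREAM here, the instance-level access setting can also be changed later; a short sketch (level 1 allows T-SQL access only, level 2 adds Win32 streaming access):

    EXEC sp_configure 'filestream access level', 2;
    RECONFIGURE;

Note that the Windows-level file I/O setting is configured separately, in SQL Server Configuration Manager.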

 

15. An instance of Analysis Services is a copy of the msmdsrv.exe executable that runs as an operating system service. Each instance is fully independent of other instances on the same server, having its own configuration settings, permissions, ports, startup accounts, file storage, and server mode properties. Server mode is a server property that determines which storage and memory architecture is used for that instance. In this step you can add Windows users in order to give them administrative permissions for Analysis Services. It is highly recommended to add the machine Administrator as well as other users who will need to manage this service. Move to the Data Directories tab for the next step.

 

16. As mentioned in step 13, the data directories can have a significant influence on server performance. Press Next to continue the installation configuration.

 

17. Choose whether you want to install and configure Reporting Services, or just install it, which means you will need to configure it later on.

 

18. Before you install and use the Microsoft SQL Server Distributed Replay feature, you should review the important security information.

 

19. When installing the Microsoft SQL Server Distributed Replay features, consider the following:

  • You can install the administration tool on the same computer as the Distributed Replay controller, or on different computers.
  • There can only be one controller in each Distributed Replay environment.
  • You can install the client service on up to 16 (physical or virtual) computers.
  • Only one instance of the client service can be installed on the Distributed Replay controller computer. If your Distributed Replay environment will have more than one client, we do not recommend installing the client service on the same computer as the controller. Doing so may decrease the overall speed of the distributed replay.
  • For performance testing scenarios, we do not recommend installing the administration tool, Distributed Replay controller service, or client service on the target instance of SQL Server. Installing all of these features on the target server should be limited to functional testing for application compatibility.
  • After installation, the controller service, SQL Server Distributed Replay controller, must be running before you start the Distributed Replay client service on the clients.
  • Make sure that the computers that you want to use meet the requirements that are described in the topic Distributed Replay Requirements.
  • Before you begin this procedure, create the domain user accounts that the controller and client services will run under. We recommend that these accounts not be members of the Windows Administrators group. For more information, see the User and Service Accounts section in the Distributed Replay Security topic.

 

20. The SQL Server installation program will show a report of all our configuration choices (steps 1-19). Once you select Next, the installation will start.

 

If everything went well, you should get a final report indicating the successful completion of each installed service. You are now ready to connect to the server.

Connecting to the Server

Open the new SSMS application which we have just installed. (In step 8 we chose which features to install; if you chose Management Tools, it includes SQL Server Management Studio.)

 

During step 9 we chose an instance name for our new installation. We told you to remember the name of the instance; now it is time to use it.
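For example, assuming you named the instance MYINSTANCE (a placeholder), you can connect from a command prompt with sqlcmd and Windows Authentication:

    sqlcmd -S localhost\MYINSTANCE -E

or type the same server name, localhost\MYINSTANCE, into SSMS’s Connect to Server dialog. Once connected, running SELECT @@SERVERNAME; confirms which instance you are on.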

SQL Server Enterprise 2014 is the whole kit and caboodle: every feature of SQL Server is included in the Enterprise edition. Some features of SQL Server 2014 are only available in Enterprise edition, such as AlwaysOn failover cluster instances and AlwaysOn Availability Groups.

SQL Server 2014 Enterprise edition on cloud is designed for today’s most demanding enterprise applications, with built-in mission-critical capabilities to ensure applications and data are up and available, scalable and secure.

Cognosys provides hardened images of SQL Server Enterprise 2014 on the cloud (SQL Server Enterprise 2014 on AWS Marketplace, SQL Server Enterprise 2014 on Azure, and SQL Server Enterprise 2014 on Google Cloud Platform).

Click on the respective cloud provider tab for technical information.

Deploy SQL Server 2014 Enterprise securely on cloud, i.e. AWS Marketplace, Azure and Google Cloud Platform (GCP)

SQL Server 2014 Enterprise Edition with IIS On Win 2012 R2

SQL Server Enterprise 2014 on cloud for AWS
