
Database Compatibility Levels – Introduction

Hello friends, let’s get a basic idea of database compatibility levels and their impact. I’ve phrased a few common questions and answered them. Let’s have a look, and feel free to ask in case of any confusion.

What is Database Compatibility Level?

Database compatibility level is a number that indicates which features your database still supports; it exists for backward compatibility. The compatibility level of a database dictates how certain language elements of the database function relative to an earlier version of SQL Server. In a nutshell, it offers partial “backward compatibility” with an earlier version.

Suppose you have a database designed in SQL Server 2000. In this case, the compatibility level of your database will be 80.

Now your organization wants to upgrade to SQL Server 2005. You upgrade the SQL Server version and restore your old database to this newer version.

Perhaps you didn’t notice that a few features you are using exist in SQL Server 2000 but are no longer supported in SQL Server 2005. You might expect the database objects using those old features to stop working. But you would be wrong: those features still work in the newer version of SQL Server. But how?

This is because of the compatibility level. When you restore your database to the newer SQL Server version, its compatibility level is still 80. This tells SQL Server 2005 that you may still be using a few SQL Server 2000 features, so those should not break.

So things will not break immediately. You have some time to find these old features and replace them with new ones.

You can check the compatibility level of your database from Management Studio.

Right-click on your database and click Properties. Go to the Options tab and you’ll see Compatibility Level.

In addition, you can check the compatibility level of all databases using the query below.

SELECT name, compatibility_level FROM sys.databases;


Who sets Compatibility Level?

The database compatibility level is initially set by SQL Server. The following scenarios help explain this.

  1. If you create a new database in SQL Server 2012, the compatibility level of this database will be 110.
  2. If you have a database created in SQL Server 2000 and you restore its backup into SQL Server 2005, the compatibility level of this database will be 80.
  3. If you have a database created in SQL Server 2000, you script out all its objects, create a new database in SQL Server 2005, and run the scripts to make it a replica of your old database, then this new database will have compatibility level 90. Note: in this case your old features will not work, because the database’s compatibility level corresponds to SQL Server 2005.

Below is a chart of the default compatibility levels of SQL Server versions.

  • SQL Server 2000 – 80
  • SQL Server 2005 – 90
  • SQL Server 2008 / 2008 R2 – 100
  • SQL Server 2012 – 110
  • SQL Server 2014 – 120

What is the Impact of this?

Let’s see this with an example. You have a database designed in SQL Server 2000, so its compatibility level would be 80. Now suppose that the database has a few queries and stored procedures that use non-ANSI joins.

-- *= is the old-style (non-ANSI) left outer join operator
select a.name, a.empno, b.loc
from emp a, dept b
where a.deptno *= b.deptno;

In the query above you can see that we are using *=. This is an old-style LEFT OUTER JOIN, supported only at compatibility level 80 (SQL Server 2000) and lower.
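For reference, here is the same query rewritten with ANSI join syntax, which works at any compatibility level (a minimal sketch, assuming the emp and dept tables from the example above):

-- ANSI equivalent of the *= join
SELECT a.name, a.empno, b.loc
FROM emp a
LEFT OUTER JOIN dept b ON a.deptno = b.deptno;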

Now you upgrade your SQL Server to 2005 and restore your database. When you run the stored procedure containing the non-ANSI join, it works fine because of the compatibility level: your database compatibility level is still 80.

If you change your database compatibility level to 90, you may break things. So make sure you replace all the old features with new ones before raising the compatibility level of your database.

Can I change it?

Of course you can change it, but now you understand the impact. If you are using old features and change the compatibility level to a newer one, you can break a few stored procedures or other database objects. So before changing the compatibility level, make sure you have replaced all old features with new ones.

Here is an example of how to set the database compatibility level in T-SQL:

ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120;

You can also change it using Management Studio; just select the compatibility level appropriate for your case.


With SQL Server 2014 and newer, the database compatibility level also controls whether the new cardinality estimator or the legacy cardinality estimator is used. The new cardinality estimator gives equivalent or better query performance for most queries, but it sometimes causes very significant performance regressions. It is very important to do as much testing as possible before you change the compatibility level to 120 or higher.
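If a particular query regresses under the new cardinality estimator, SQL Server 2014 lets you request the legacy estimator for just that query via trace flag 9481. A minimal sketch (the query and table names here are hypothetical, and QUERYTRACEON typically requires sysadmin permission):

SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481); -- use the legacy cardinality estimator for this query only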

Note:

  1. Don’t assume that this setting will let you restore the database backward to an older version. You cannot restore a backup of a newer database to an older version. There are certain rules for downgrading, which you can check here: Sql Server Downgrades-Rules & Restrictions
  2. Don’t think that the compatibility level is the same as the database version. They are different properties with different concepts behind them. The database version is an internal versioning system that defines which version of SQL Server the database was most recently resident on. I will cover more details about database version in upcoming articles.

Upgrade Advisor

We have an option to help with upgrades: Upgrade Advisor. SQL Server Upgrade Advisor helps you prepare for upgrades to SQL Server 2014. It analyzes installed components from earlier versions of SQL Server, then generates a report that identifies issues to fix either before or after you upgrade.

You can get more information about Upgrade Advisor here: Upgrade Advisor

Feel free to experiment with new things and connect with me.


Database Extensibility With Agile

Hi Friends, today we are going to discuss database extensibility and how it fits into business scenarios for agile development.

What is Database Extensibility?

Database extensibility is a measure of how easily a database can adopt new changes without major schema refactoring or side effects.

If the database schema is built in such a way that a small change in any table forces major changes in stored procedures, functions, and the client code that reads the data, then it can be considered a poor database design.

Database objects such as tables, stored procedures, functions, views, and triggers should be written so that they can adopt new changes easily, with minimal or no code changes.

Here are a few guidelines for making a database more extensible.

  • Tables should be normalized properly.
  • Generalization should be used so that specific domain entities can be extended (see the sketch after this list).
  • Data should not be duplicated across multiple tables.
  • Each table should model one specific domain entity.
  • Do not use one table to hold unrelated data with many NULL columns.
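As a minimal sketch of the generalization point above (all table and column names here are hypothetical), a common pattern is a supertype table extended by subtype tables:

-- Supertype: attributes common to every party
CREATE TABLE Party (
    PartyID int IDENTITY(1,1) PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    PartyType char(1) NOT NULL -- 'P' = person, 'O' = organization
);

-- Subtypes: attributes specific to each kind of party
CREATE TABLE Person (
    PartyID int PRIMARY KEY REFERENCES Party(PartyID),
    BirthDate date NULL
);

CREATE TABLE Organization (
    PartyID int PRIMARY KEY REFERENCES Party(PartyID),
    TaxNumber varchar(20) NULL
);

A new kind of party can be added later as another subtype table, without touching the existing tables or the code that reads them.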

What is Agile Development Methodology?

As developers we are very well aware of the waterfall method. Waterfall is a development methodology in which a major part of a project proceeds sequentially. The steps are:

  1. We gather requirements for a major project or a major part of a project.
  2. We prepare designs and get approvals.
  3. We code the entire solution in one go.
  4. Testing
  5. Control

These stages are sequential, but the main problem is the volume of work. The methodology works fine, but for larger projects each stage takes a significant amount of time and effort. By the time we reach the final implementation, many business or requirement changes may have occurred. These changes can create issues, or trigger a whole new waterfall process to tackle them.

In the waterfall model, one has to wait until the final code is complete before the product is visible.

To address this issue, the agile methodology came into the picture. It is an iterative approach with sprints of 15, 21, or 30 days. Each sprint delivers working code that can be demonstrated to the client, so any requirement changes can be incorporated at an early stage of development.

This way one can save a lot of rework, and the client and team can see work in progress and validate it against the requirements at the same time.

Why do we need database extensibility?

As we know, agile works through small iterations. As we cycle through iterations, an extensible database absorbs new business requirements with less refactoring.

If the database design is poor, then in later stages and iterations it becomes messy to fit everything into the database. So one should make these decisions at the earliest design stages and make the database as extensible as possible.

Benefits of a good schema

There are several benefits of good database design. Below are a few.

Set-Based Queries

SQL Server is designed to handle data in sets. SQL is a declarative language: the query describes the problem, and the Query Optimizer generates an execution plan to resolve the problem as a set.

Iterative T-SQL code acts upon data one row at a time instead of as a set, generally implemented via cursors. This forces the database engine to perform thousands of wasteful single-row operations instead of handling the problem in one larger, more efficient set operation.

This is why set-based queries, based on an obvious physical schema, are so critical to database performance.
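As a small illustration (the table and column names here are hypothetical), compare a cursor-based update with its set-based equivalent:

-- Row-by-row: the cursor touches one row per iteration
DECLARE @id int;
DECLARE c CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'New';
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Queued' WHERE OrderID = @id;
    FETCH NEXT FROM c INTO @id;
END
CLOSE c;
DEALLOCATE c;

-- Set-based: one statement, one efficient operation
UPDATE dbo.Orders SET Status = 'Queued' WHERE Status = 'New';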

Indexing

A sound indexing strategy identifies the handful of queries that represent 90 percent of the workload and, with judicious use of clustered and covering indexes, answers those queries without expensive lookup operations.
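For example (a hypothetical table and columns), a covering index can answer a frequent query from the index alone, avoiding key lookups into the base table:

-- The INCLUDE columns make this index "cover" the query below
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalAmount);

SELECT OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerID = 42; -- satisfied entirely from the index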

An elegant physical schema, well-written set-based queries, and excellent indexing reduce transaction duration, which implicitly improves concurrency and sets up the database for scalability.

Concurrency

A database with an excellent physical schema, well-written set-based queries, and the right set of indexes will have tight transactions, and hence will perform well with multiple users.

When a poorly designed database displays symptoms of locking and blocking issues, transaction isolation level tuning only partially alleviates the problem. The sources of the concurrency issue are the long transactions and additional workload caused by the poor database schema. Concurrency tuning cannot overcome the deficiencies of a poor database design.

Scalability

The database design is the principal factor determining the overall monetary cost of the database system. A well-designed database minimizes hardware costs, simplifies data access code and maintenance jobs, and significantly lowers both the initial and the total cost of the system.

Lower maintenance costs also make it easier to scale the system.

You can read more about database design principles here.

We can see that a good database schema is the backbone of the overall system. It gives countless direct and indirect benefits, while a poorly designed database can create unexpected issues.


Database Design Principles

Hello friends. As we are in the process of exploring good database architecture and concepts, it is very important to look into a few principles of good database architecture. These database design principles are the basis for any good database and should always be kept in mind while designing database systems.

Before going into the principles, let’s first discuss what information is. Information is a valuable organizational asset, and according to its value and scope it must be organized, stored, secured, and readily available in a usable format for daily operations and analysis by individuals, groups, and processes, both today and in the future.

Anything that is valuable to someone is information, and it can be used in productive or destructive ways. Because information in the wrong hands can have dangerous consequences, it is very important to organize it securely.

With these key attributes of information in mind, let’s now discuss what makes a good information management system.

There are six main objectives that a good database must fulfill effectively.

  1. Usability
  2. Extensibility
  3. Data Integrity
  4. Performance
  5. Availability
  6. Security

Let’s discuss a little in detail.

Usability

Any information we store should be meaningful for the organization. If we store attributes that do not actually fit the organization’s requirements, we are simply wasting resources.

The primary objective of any information system should be to meet the organization’s requirements. Here are a few points to consider when starting an architecture.

  1. Gather detailed requirements.
  2. See how the information fits the requirements.
  3. Maintain a requirements traceability matrix to capture the mapping between the information architecture and the requirements.
  4. Keep the organization of the data simple.
  5. Decide on the data format so that it can easily be converted into a meaningful representation.

Extensibility

New business requirements come up every day, and there is a constant need to change or enhance the information system to capture them. So the information design should be extensible, able to adopt new requirements without much effort and without major breaking changes.

If your initial design is too complex or disorganized, it may make it difficult to adopt new things effectively.

Here are a few points to consider when thinking about extensibility.

  • Normalization and correct handling of optional data.
  • Generalization of entities when designing the schema.
  • Data-driven designs that not only model the data but also enable the organization to store behavioral patterns or flows that can be hooked in at different stages of information processing.
  • A well-defined abstraction layer that decouples the database from all client access, including client apps, middle tiers, ETL, and reports.
  • Extensibility is also closely related to simplicity. Complexity breeds complexity. A simple solution is easy to understand and adopt, and ultimately, easy to adjust later.


Data Integrity

At this point we understand how important information is for any organization. Based on historic information, every organization makes strategies and decisions for growth. One small mistake in the data can lead to major issues with a key decision, and hence a big risk to growth.

When designing a good information system, we must keep the integrity and correctness of the data in mind. The system should be smart enough to handle incorrect or missing data attributes and, based on that, either take corrective action or reject the data outright. Incorrect data should not be present in the system, or at least should not be exposed to individuals, where it would create misunderstanding.

Data integrity can be of many types.

Entity Integrity

Entity integrity involves the structure (the primary key and its attributes) of the entity. If the primary key is unique and all attributes are scalar and fully dependent on the primary key, then the integrity of the entity is good. In the physical schema, the table’s primary key enforces entity integrity.

Domain Integrity

Domain integrity means that data should be of the correct type and that optional data should be handled correctly. We should allow nullability only for attributes that are optional for the organization, and define proper data types for attributes based on the organization’s requirements, so that data is present in the system in the correct format.

Referential Integrity

Referential integrity means that if an entity depends on another, the parent entity must exist in the system and be uniquely identifiable. We achieve this by implementing foreign keys.
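A minimal sketch (hypothetical tables) of how the physical schema enforces entity, domain, and referential integrity declaratively:

CREATE TABLE dbo.Department (
    DeptNo int NOT NULL PRIMARY KEY,      -- entity integrity: unique primary key
    DeptName nvarchar(50) NOT NULL        -- domain integrity: correct type, required
);

CREATE TABLE dbo.Employee (
    EmpNo int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    DeptNo int NULL                       -- optional attribute: NULL allowed
        REFERENCES dbo.Department(DeptNo) -- referential integrity: parent must exist
);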

Transactional Integrity

Transactional integrity means that every transaction exhibits the ACID properties: it should be atomic, consistent, isolated, and durable. The quality of a database product is measured by its transactions’ adherence to the ACID properties:

Atomic — all or nothing

Consistent — the database begins and ends the transaction in a consistent state

Isolated — one transaction does not affect another transaction

Durable — once committed always committed 
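A small T-SQL sketch (with a hypothetical Account table) of keeping a multi-statement change atomic with an explicit transaction:

BEGIN TRY
    BEGIN TRANSACTION;
    -- Both updates succeed together or not at all (atomicity)
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION; -- undo everything on any error
    THROW;
END CATCH;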

User-Defined Integrity

There are some business rules that we cannot validate with primary keys, foreign keys, and the like. There has to be a mechanism to validate such complex integrity rules. We can implement these rules in the following ways (see the sketch after this list):

  • Check Constraints
  • Triggers & Stored Procedures
  • Queries to identify incorrect data and handle it in the correct way.
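For instance (a hypothetical rule on a hypothetical Orders table), a check constraint can enforce a business rule that keys alone cannot:

-- Rule: an order's ship date cannot precede its order date
ALTER TABLE dbo.Orders
ADD CONSTRAINT CK_Orders_ShipDate
    CHECK (ShipDate IS NULL OR ShipDate >= OrderDate);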


Performance

Information should be readily available when requested, and the performance of the system should be up to the mark. As data grows day by day, performance will eventually suffer if the database design is poor or if we take no action to improve it.

The following are a few strategies we can apply as data grows.

  • A well-designed schema with normalization and generalization
  • A sound indexing strategy, including careful selection of clustered and nonclustered indexes
  • Tight, fast transactions that reduce locking and blocking
  • Partitioning, which is useful for advanced scalability


Availability

The availability of information refers to the information’s accessibility when required regarding uptime, locations, and the availability of the data for future analysis. Disaster recovery, redundancy, archiving, and network delivery all affect availability.

Following are the factors which impact availability:

  • Quality, redundant hardware
  • SQL Server’s high-availability features
  • Proper DBA procedures regarding data backup and backup storage
  • Disaster recovery planning

Security

For any organizational asset, the level of security must match its value and sensitivity. Organizations have sometimes suffered greatly from data leaks, which result in loss of trust and real business risk. So security is one of the most important aspects of good database design.

We can enhance security in the following ways:

  • Physical security and restricted access of the data center
  • Defensively coding against SQL injection
  • Appropriate operating system security
  • Reducing the surface area of SQL Server to only those services and features required
  • Identifying and documenting ownership of the data
  • Granting access according to the principle of least privilege, which is the concept that users should have only the minimum access rights required to perform necessary functions within the database (see the sketch after this list)
  • Cryptography — data encryption of live databases, backups, and data warehouses
  • Metadata and data audit trails documenting the source and veracity of the data, including updates
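As a sketch of the least-privilege point above (the role, schema, and user names are hypothetical), grant only the access a group of users actually needs:

-- A reporting role that can read the Sales schema and nothing else
CREATE ROLE ReportReader;
GRANT SELECT ON SCHEMA::Sales TO ReportReader;
ALTER ROLE ReportReader ADD MEMBER ReportUser; -- ReportUser is a hypothetical database user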

Based on above principles, one should start designing databases and architectures.