IT World Canada: Is Data Modelling Really Dead?

 

Yogi Schulz


Published: February 17th, 2015

 

If your Agile software development effort is delivering less functionality or producing more defects with each iteration, then your Agile project team probably believes that data modelling is dead or irrelevant.

The view that data modelling is dead or irrelevant arises from these trends:

  1. The rise of NoSQL datastores that offer flexibility through schema-less key-value pairs (illustrated in the sketch after this list).
  2. Wider acceptance of analytic databases that leverage key-value pairs and in-memory technologies.
  3. The Agile focus on end-user stories, which are business process descriptions, as the means for discovering and elaborating requirements.
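
To make the first trend concrete, here is a minimal Python sketch (the record keys and attribute names are illustrative, not taken from the article) showing how a schema-less key-value store accepts records with differing attributes, whereas a relational table would require a schema change first.

    import json

    # A schema-less key-value store: each value is an arbitrary document.
    # Records can carry different attributes without any schema change.
    store = {}

    store["customer:1001"] = json.dumps({"name": "Acme Ltd.", "city": "Calgary"})
    store["customer:1002"] = json.dumps({"name": "Borealis Inc.",
                                         "city": "Halifax",
                                         "credit_limit": 50000})  # new attribute, no ALTER TABLE

    for key, value in store.items():
        print(key, json.loads(value))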

Let’s examine whether data modelling is really dead or still adds value to the software development process.

Data modelling is dead

In many Agile software development environments, it’s the end-user stories that are all-important. The underlying data model is irrelevant. It’s assumed to just take care of itself with little effort.

The end-user stories describe the desired system in business process terms. This occurs because business staff think with a process mindset and can therefore describe their requirements only in that process context. Very few people think with a data structure mindset.

While end-user stories are obviously valuable and essential, a business process mindset focuses development on successive process steps with little thought to the underlying data structures and how they critically support the business process.

The developers lay out the data identified by the end-user stories on screens and write it to the database using a surrogate key. Adding more tables and columns to the database definition is given little thought and requires little effort in each development iteration.
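
As an illustration of that pattern, here is a minimal Python sketch (the table and column names are hypothetical, not from the article) in which the fields captured on a screen are simply written to a table keyed by an autoincrement surrogate key; adding another column for the next story is a one-line change.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Each iteration: dump the fields captured on a screen into a table
    # keyed by an autoincrement surrogate key; no natural key is modelled.
    conn.execute("""
        CREATE TABLE work_order (
            id          INTEGER PRIMARY KEY AUTOINCREMENT,  -- surrogate key
            customer    TEXT,
            description TEXT
        )""")
    conn.execute("INSERT INTO work_order (customer, description) VALUES (?, ?)",
                 ("Acme Ltd.", "Replace pump seal"))

    # A later story needs another field: just bolt on a column.
    conn.execute("ALTER TABLE work_order ADD COLUMN due_date TEXT")
    conn.commit()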

As more end-user stories are developed, the team must recognize additional foreign keys and correct or refine its earlier, incomplete understanding of primary keys and various attribute columns. In each iteration, the database definition grows considerably, still without much thought or effort.

However, this restructuring or refactoring of the database definition requires modification and retesting of the modules that are already in production. That’s a serious amount of effort that slows progress and produces more defects.
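
Continuing the hypothetical work_order example above, here is a sketch of what such a refactoring looks like: a free-text customer column is replaced by a foreign key to a new customer table, the existing rows are migrated, and any in-production module that still reads work_order.customer now has to be rewritten and retested.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE work_order (id INTEGER PRIMARY KEY, customer TEXT, description TEXT)")
    conn.execute("INSERT INTO work_order (customer, description) VALUES ('Acme Ltd.', 'Replace pump seal')")

    # Refactoring: introduce a proper customer table and a foreign key.
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
    conn.execute("INSERT INTO customer (name) SELECT DISTINCT customer FROM work_order")
    conn.execute("ALTER TABLE work_order ADD COLUMN customer_id INTEGER REFERENCES customer(id)")
    conn.execute("""UPDATE work_order
                    SET customer_id = (SELECT id FROM customer
                                       WHERE customer.name = work_order.customer)""")
    # work_order.customer is now redundant; once it is dropped, every module
    # that still reads it must be modified and retested.
    conn.commit()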

Data modelling is important

Can software development be accelerated, and refactoring of in-production modules be materially reduced, by giving data modelling a more prominent position in the Agile software development environment?

Some software developers clearly think so. They describe a development process in which work on the functionality of the system, based on the business process model, alternates with work on the database schema, based on the data model. In this Agile model, work flip-flops back and forth between the two in each iteration.

Scott Ambler has described an evolutionary approach to data modelling that better aligns the data modelling work of data professionals with the Agile software development process of the software developers.

Some Agile software developers use techniques that allow a database design to evolve as the application develops through the iterations. The techniques rely on applying continuous integration and automated refactoring to database development, together with close collaboration between data professionals and software developers.
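
As a sketch of what automated schema refactoring can look like in practice (this is a generic illustration, not a specific tool the article names), a small migration runner applies numbered schema changes exactly once, so the database definition can evolve with each iteration and be rebuilt from scratch in a continuous integration job.

    import sqlite3

    # Ordered, versioned schema changes; each iteration appends new ones.
    MIGRATIONS = [
        (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
        (2, "CREATE TABLE work_order (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER REFERENCES customer(id), description TEXT)"),
        (3, "ALTER TABLE work_order ADD COLUMN due_date TEXT"),
    ]

    def migrate(conn):
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
        for version, statement in MIGRATIONS:
            if version not in applied:       # apply each change exactly once
                conn.execute(statement)
                conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()

    migrate(sqlite3.connect(":memory:"))  # a CI job would point this at a test database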

If you think I misunderstand the issues or I’m overstating the value of data modelling, please post a comment.

If you want to read the articles in a bibliography on this subject that I’ve created, please send me an email.

Read more: http://www.itworldcanada.com/blog/is-data-modelling-really-dead/102067