Management of existing legacy data is a vital part of any subsurface workflow. Prior to interpretation, data must be collated, standardized, quality-controlled at varying levels and presented in a form that is easily accessible.

In times of low oil prices this data becomes even more important: reductions in new data acquisition place even greater emphasis on legacy data as a means of identifying new leads and missed opportunities.

Due to its nature, legacy data is likely to have been subject to previous processing and interpretation. However, the workflows used and the extent of this processing may be unclear, reducing confidence in the quality and usability of such data. Any management system should aim to provide as much information as possible about the source and provenance of the data.
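One way to capture such provenance is to attach a recorded processing history to each dataset. The sketch below is purely illustrative: the class and field names are assumptions, not part of any standard, but it shows how a missing history can translate directly into a lower confidence flag.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProcessingStep:
    """One processing or interpretation step applied to a dataset."""
    description: str
    performed_on: date
    software: str = "unknown"   # legacy records often omit this detail

@dataclass
class LegacyDataset:
    """A legacy dataset together with its recorded provenance."""
    name: str
    source: str
    history: list = field(default_factory=list)

    def confidence(self) -> str:
        # A dataset with no recorded history warrants lower confidence.
        if not self.history:
            return "low: no processing history recorded"
        return f"documented: {len(self.history)} step(s) on record"

survey = LegacyDataset(name="blk-12-2d-seismic", source="1998 vendor tape")
survey.history.append(ProcessingStep("post-stack time migration", date(1999, 3, 1)))
print(survey.confidence())  # → documented: 1 step(s) on record
```

Even this minimal record answers the two questions legacy data most often raises: where did it come from, and what has been done to it.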

Goals of Data Management

Data management in its simplest form can be defined as "controlling the information generated during a project" (Penn State University Libraries). A more practical definition is the control of information utilized and generated in a project, managed effectively enough to improve the decision-making process.

The ultimate purpose of any subsurface project is to provide information that can be used to inform decisions made as part of a business strategy. To increase confidence in those decisions, the models that produce the information must carry reduced levels of uncertainty. The process of reducing uncertainty begins with the initial data that is interpreted. Therefore any data used throughout the model-building process should have high confidence levels and low attributed uncertainty – this is a key requirement of the data management process.

The steps taken to ensure data is ready for interpretation are frequently referred to as the path to data readiness. Minimizing the length of this path (without sacrificing the end result) is a key challenge for data managers. Subsurface interpretation is where value is added to data, enabling key decisions about field development and business strategy to be made. Less time spent preparing data frees up more time for interpretation.
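The path to data readiness can be pictured as a short chain of small, auditable stages. The sketch below is one possible arrangement, assuming simple well-log records; the field names and QC thresholds are illustrative, not a company standard.

```python
# A minimal "path to data readiness": standardize, collate, then QC.
# Each stage is a small function, so the pipeline stays easy to audit
# and extend. Record fields (well, depth_m) are assumptions.

def standardize(records):
    """Apply naming conventions: trimmed, upper-case well names."""
    for r in records:
        r["well"] = r["well"].strip().upper()
    return records

def collate(records):
    """Merge records, dropping exact duplicates by (well, depth)."""
    seen, out = set(), []
    for r in records:
        key = (r["well"], r["depth_m"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def qc(records):
    """Flag physically implausible values rather than silently dropping them."""
    for r in records:
        r["qc_pass"] = 0 <= r["depth_m"] <= 12000
    return records

raw = [
    {"well": "a-1 ", "depth_m": 1500.0},
    {"well": "A-1", "depth_m": 1500.0},   # duplicate once names are standardized
    {"well": "B-2", "depth_m": -50.0},    # implausible depth, flagged by QC
]

ready = qc(collate(standardize(raw)))
for r in ready:
    print(r)
```

Note the ordering: standardization runs before collation so that records differing only in formatting are recognized as duplicates, and QC flags suspect values instead of deleting them, preserving an audit trail.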

Data Management Strategy

How such goals are achieved varies depending on the size of the company, its internal infrastructure and the types of data stored; as such, no one-size-fits-all approach to data management exists. That said, an effective database solution must fulfil the following criteria:

1. Organized – the key to any data store is that it is organized. Data must be easy to find within the database and follow a logical structure. Any processes applied to the data should be recorded.
2. Standardized – where possible, data should be stored in a way that follows company standards, be this naming conventions, locations, units etc. Standardization is key to allowing data to be accurately integrated into company workflows.
3. Connected and Timely – those who use data need to be able to access the appropriate information quickly to speed up their workflows. Ideally, any database should be live-linked, enabling updates to data to be uploaded efficiently and accessed by all appropriate users.
4. Repeatedly Interrogable – in addition to locating data quickly, users must have the ability to 'dive deeper' into the data. This includes finding where data exists against depth or spatially, and the ability to cross-reference multiple types of data for QC purposes. Such queries should reflect the most recent data available.
5. Intuitive – the front end of any database should be easy to navigate and act as a way to speed up the interpretation process. Suitable data should be clearly identifiable through basic query searches.

The above guidelines allow data to be utilized quickly and repeatedly in value-adding workflows while ensuring that confidence levels in the data are maintained. They should also be followed to ensure that new data, and data produced by existing workflows, are easily integrated into the existing system.
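To make the "interrogable" criterion concrete, the sketch below indexes datasets in an in-memory SQLite table and cross-references them by depth and location. The table, column names and coordinate values are assumptions chosen for illustration; the point is that a standardized index makes depth-based and spatial queries trivial.

```python
import sqlite3

# An illustrative data index: each dataset records its type, depth
# coverage and spatial extent (in an assumed project CRS).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE datasets (
        name      TEXT PRIMARY KEY,   -- standardized dataset name
        data_type TEXT,               -- e.g. 'log', 'core', 'seismic'
        top_m     REAL, base_m REAL,  -- depth coverage
        x_min REAL, x_max REAL,       -- spatial extent
        y_min REAL, y_max REAL
    )""")
con.executemany(
    "INSERT INTO datasets VALUES (?,?,?,?,?,?,?,?)",
    [
        ("A-1_GR",   "log",     800, 2100, 452000, 452000, 6.71e6, 6.71e6),
        ("A-1_CORE", "core",   1450, 1480, 452000, 452000, 6.71e6, 6.71e6),
        ("BLK12_3D", "seismic",   0, 4000, 450000, 460000, 6.70e6, 6.72e6),
    ],
)

def datasets_covering(depth_m, x, y):
    """Cross-reference: every dataset covering a given depth and location."""
    return con.execute(
        """SELECT name, data_type FROM datasets
           WHERE top_m <= ? AND base_m >= ?
             AND x_min <= ? AND x_max >= ?
             AND y_min <= ? AND y_max >= ?""",
        (depth_m, depth_m, x, x, y, y),
    ).fetchall()

# At core depth in well A-1, the log, the core and the 3D survey all
# overlap – exactly the cross-referencing needed for QC.
print(datasets_covering(1460, 452000, 6.71e6))
```

Because the query runs against the live table, its results always reflect the most recently loaded data, which is the essence of criteria 3 and 4 above.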

Creation of such a system can be resource intensive, but the far-reaching benefits in terms of time saving and data accuracy should not be underestimated.

References: Penn State University Libraries (retrieved 29 June 2015)