Perigon at AAPG 2017

February 24th 2017

Perigon are excited to announce that we will be exhibiting at AAPG ACE 2017. The conference will be held from the 2nd to the 5th of April 2017 at the George R. Brown Convention Center in Houston.

Come visit us at booth 2441 for a chat and a demo of iPoint.

For more information please email Chris Hanton.

Perigon will be in attendance at the 2016 Society of Core Analysts Symposium in Snowmass, Colorado, from the 22nd to the 25th of August.

We hope to see some of you there! For more information or to arrange a demonstration with one of our experts please contact Chris Hanton.

Perigon and EasyCopy are pleased to announce that iPoint and EasyCore are now fully integrated. Starting with iPoint 2016.1 and EasyCore 1.2.6, information may be transferred between the two packages, allowing faster and more powerful workflows to the benefit of our users.

EasyCore projects can now be stored in the iPoint datastore and opened directly from iPoint. This enables the full power of iPoint’s multiscalar visualization to be utilized, allowing complete integration of EasyCore datasets with other sources of wellbore data.

“iPoint and EasyCore are two complementary solutions that together offer a lot of advantages to our clients, both existing and new. Combined, we will handle the workflow of capturing and visualizing core data even better,” said EasyCopy CEO Christian Drejoe.

Perigon CEO Paul Reid added “Through the integration of EasyCore and iPoint we now offer a unique and easy way to capture and digitize core descriptions whilst providing access to that data more efficiently.  We are excited to partner with EasyCopy as we continue to advance our portfolio of core workflow offerings for the benefit of our clients.”

Perigon is looking forward to continuing our work with EasyCopy and the opportunities this alliance will create for our clients.  For more information about EasyCopy and their offerings please visit: http://www.myeasycopy.com.

Perigon are pleased to announce that iPoint 2016.1 is now available for download. In these challenging times we understand the desire for increased efficiency and value from investments. This release therefore has a strong focus on helping you accomplish more with your legacy data.

iPoint Publisher

With iPoint Publisher users can instantaneously create an iPoint Viewer project: a standalone, read-only version of an iPoint project that facilitates the flexible export and transfer of data. iPoint Viewers can be delivered to partners, clients and colleagues anywhere in the world, enabling all the visualization and data mining advantages of iPoint without the requirement for a standard iPoint license.

Drag & Drop Export

iPoint export options are now even more convenient and flexible. Datasets can be dragged from iPoint and dropped to a file location in a range of commonly used file formats. Furthermore, datasets can also be dragged directly into open applications, such as Excel and log interpretation packages like Interactive Petrophysics and Techlog.

Search & Report

Enhancements to the Search & Report module allow for more powerful searching alongside an improved user interface. ‘Query Groups’ allow for the creation of more complex searches, whilst users may now seamlessly switch between reporting on curves, datasets or wells. Additionally, ‘Favourites’, or common searches, can be saved for even faster searching when you need it.

Processing Point

Control over data exporting has been taken to a new level. Exported curves can be merged either manually or via their Curve Alias Table. Unit conversions can be run automatically as part of any data export routine, and saved ‘Search & Report’ templates can now be used in Processing Point to quickly isolate data for exporting.

iPointWeb

iPointWeb allows users to access data in an iPoint system from anywhere in the world. The latest release builds on this functionality to allow Well Header information, datasets, images and documents to be opened, viewed and exported. Data found in iPointWeb can be sent directly to the desktop version of iPoint for further workflows.

iPoint version 2016.1 is now available for download. Please contact your local iPoint representative for more information.

Perigon at PNEC 2016

March 25th 2016

Perigon are pleased to announce they will be exhibiting at PNEC in Houston from the 17th to 19th of May. We'll be at Booth 405 and hope to see you there!

If you would like to arrange a personalized demonstration or a chat with one of our experts please contact Chris Hanton.

Management of existing legacy data is a vital part of any subsurface workflow. Prior to interpretation, data must be collated, standardized, QC’d at varying levels and presented in a form which is easily accessible.

In times of low oil prices this data becomes even more important, as reductions in new data acquisition place even greater emphasis on using legacy data to identify new leads and missed opportunities.

Due to its nature, legacy data is liable to have been subject to previous processing and interpretation. However, the workflows used and the level of this processing may be unclear, reducing confidence in the quality and usability of such data. Any management system should aim to provide as much information about the source and provenance of the data as possible.
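
To make this concrete, provenance can be recorded as structured metadata that travels with each dataset. The sketch below is a minimal, generic illustration in Python; the record layout, field names and example values are assumptions for the purpose of illustration and are not tied to any particular product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """A legacy dataset together with the provenance needed to judge its usability."""
    well: str                                              # well identifier
    curve: str                                             # e.g. "GR", "DT", "RHOB"
    units: str                                             # units as stored
    source: str                                            # original vendor, contractor or archive
    acquired: date                                         # acquisition date
    processing_steps: list = field(default_factory=list)   # what was done, by whom, and when

# A sonic log whose editing history is recorded explicitly, so later users
# can assess how much processing it has already been through.
dt_log = DatasetRecord(
    well="WELL-001", curve="DT", units="us/ft",
    source="Contractor field tape, 1987 archive",
    acquired=date(1987, 6, 3),
    processing_steps=["Despiked (2004)", "Depth shifted +1.2 m (2011)"],
)
```

Even this small amount of structured provenance makes it far easier to decide whether a legacy curve can be trusted as-is or needs to be reprocessed.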

Goals of Data Management

Data Management in its simplest form can be defined as “Controlling the information generated during a project” (Penn State University Libraries). A more practical definition is the effective control of the information utilised and generated in a project in order to improve the decision making process.

The ultimate purpose of any subsurface project is to provide information which can be used to inform decisions made as part of a business strategy. To increase confidence in those decisions, the uncertainty in the models that produce the information must be reduced. The process of reducing uncertainty begins with the initial data that is interpreted. Therefore any data used throughout the model building process should have high confidence and low uncertainty attributed to it; this is a key requirement of the data management process.

The steps taken to ensure data is ready for interpretation are frequently referred to as the path-to-data-readiness. Minimizing the length of this path (whilst not sacrificing the end result) is a key challenge for data managers. The subsurface interpretation is where value is added to data, enabling key decisions about field development and business strategy to be made. Less time preparing data frees up more time for interpretation.

Data Management Strategy

How such goals are achieved varies depending on the size of the company, its internal infrastructure and the types of data stored, and as such no one-solution-fits-all approach to data management exists. That said, an effective database solution must fulfil the following criteria:

1. Organized – the key to any data store is that it is organized. Data must be easy to find within the database and follow a logical structure. Any processes applied to the data should be recorded.
2. Standardized – where possible, data should be stored in a way which follows company standards, be this naming conventions, location, units, etc. Standardization is key to allowing data to be accurately integrated into company workflows.
3. Connected and Timely – those who use data need to be able to access the appropriate information quickly to speed up their workflows. Ideally, any database should be live-linked, enabling updates to data to be uploaded efficiently and accessed by all appropriate users.
4. Repeatedly Interrogable – in addition to locating data quickly, users must have the ability to ‘dive deeper’ into the data. This includes finding where data is stored against depth or spatially, and the ability to cross-reference multiple types of data for QC purposes. Such queries should reflect the most recent data available.
5. Intuitive – the front end of any database should be easy to navigate and act as a way to speed up the interpretation process. It should be possible to clearly identify suitable data through basic query searches (a generic sketch follows this list).
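
As a generic illustration of criteria 2, 4 and 5, the sketch below runs a basic query search over a small, standardized catalogue of well data. It is a hypothetical example in Python using pandas; the table layout, column names and values are assumptions and do not represent any specific database product.

```python
# A basic query search over a standardized well-data catalogue.
# Column names and values are illustrative only.
import pandas as pd

catalogue = pd.DataFrame([
    {"well": "WELL-001", "curve": "GR",   "units": "gAPI",  "top_m": 1500.0, "base_m": 2300.0, "qc_status": "approved"},
    {"well": "WELL-001", "curve": "RHOB", "units": "g/cm3", "top_m": 1500.0, "base_m": 2250.0, "qc_status": "pending"},
    {"well": "WELL-002", "curve": "GR",   "units": "gAPI",  "top_m": 1400.0, "base_m": 2100.0, "qc_status": "approved"},
])

# Which approved gamma-ray curves cover the 1600-2000 m interval?
hits = catalogue[
    (catalogue["curve"] == "GR")
    & (catalogue["qc_status"] == "approved")
    & (catalogue["top_m"] <= 1600.0)
    & (catalogue["base_m"] >= 2000.0)
]
print(hits[["well", "units", "top_m", "base_m"]])
```

Because the catalogue is standardized (consistent curve names, units and depth references), the same query can be re-run as new data arrives, which is exactly the combination of standardization and repeatable interrogation described above.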

These guidelines allow data to be utilized quickly and repeatedly in value-adding workflows whilst ensuring that confidence in the data is maintained. Following them also ensures that new data, and data produced as a result of existing workflows, is easily integrated into the existing system.

Creation of such a system can be resource intensive, but the far-reaching benefits in terms of time saving and data accuracy cannot be overstated.

References

Penn State University Libraries, https://www.libraries.psu.edu/psul/pubcur/what_is_dm.html (retrieved 29th June 2015).