Data-driven O&G intelligence

Pyramid Analytics: Main lessons learned from the Data-Driven Drilling and Production Conference

It was great to be at the Data-Driven Drilling and Production Conference in Houston on June 11 and 12. The conference was well attended by hundreds of oil and gas (O&G) professionals looking to use technology to minimize downtime, enhance safety, and deliver digital transformation across their businesses.

We talked to dozens of attendees looking to educate themselves about modern data collection and ingestion methods, better information management and integration processes, E&P automation and control systems, more efficient change management, drilling optimization techniques, and advanced and predictive analytics.

As an analytics and BI vendor, we were there to learn more about how practitioners are using advanced analytics, particularly AI and machine learning, to extract more value out of their data.

Three key themes

In our conversations with attendees and other vendors, three key themes emerged:

  • The persistence of data silos

    No surprise here: data silos aren’t going anywhere. The upstream organizations we spoke to struggle to share data across departments, and users commonly have only limited access to distributed data. It is also common for upstream organizations to perform analytics with numerous tools (many of the individuals we spoke to freely admitted to using three or four different BI tools). This perpetuates the old cliché that there is no single version of the truth. The result is duplicate data, duplicate reporting effort, duplicate logic and business rules, and more, and collaboration and efficiency suffer.
  • AI and ML operationalization remain elusive

    Many of the professionals we talked to lack effective systems for putting advanced analytics into production. Here’s a common scenario: a line-of-business user hands data scientists a dataset and says, “Here’s the data, do your magic.” The data isn’t always clean or well structured, so data scientists often spend time prepping it before they can even analyze it. They then analyze the data in standalone ML applications before outputting a flat file and sending it to a business analyst, who reloads it into one of several desktop-based BI applications. The result is a perpetual cycle of extracting, importing, analyzing, exporting, re-importing, and re-analyzing data. The whole process is cumbersome and inefficient, and the meaningful insights derived from AI and ML initiatives remain limited.

  • It’s hard to move beyond legacy analytics systems 

    Many O&G companies have a strong desire to adopt new data and analytics technologies; they acknowledge that legacy tools simply aren’t equipped to quickly accommodate newer sources of data or to perform advanced and prescriptive analytics. However, the difficulty of migrating from legacy systems often holds people back, no matter how siloed their data environment is. Many organizations have had their current desktop-based analytics solutions in place for years, and in some cases decades, and the huge store of analytic models, dashboards, and reports they have built up over that time cannot be easily migrated or re-created.

The three challenges identified above are tough. But that doesn’t make solving them any less urgent, and from our perspective, it doesn’t make them any less solvable. The price of inaction is too high: no one can afford to stand on the sidelines while the technology environment changes.

Author: Brigette Casillas

Source: Pyramid Analytics