Archive for the ‘Digital Oilfield’ category

Making new IT work for the business

September 23, 2011

I found an EXCELLENT article in the Digital Energy Journal by Dutch Holland. In it he explores different strategies for transforming operational requirements into successful IT initiatives.

Without stealing too much of his well-articulated article, the five approaches normally used are:

  • The by-the-book business analyst
  • The business-experienced analyst
  • The businessman CIO
  • The IT expert inside the business
  • The operations-led interface

I encourage anyone attempting to implement an operations-centric technological solution to read his article.

http://www.findingpetroleum.com/n/Making_new_IT_work_for_the_business/d1a1861b.aspx

“When trying to connect technology innovation with business, an intelligent interface between the two is required. It must be able to translate business opportunity into technical requirements; innovate, test and evaluate; and seamlessly implement new technology into the business.” ~Dutch Holland

Predictive Analytics

September 9, 2011

Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields (Wikipedia). But operations – manufacturing, processing, and the like – have been a little slower to embrace the concept. A drilling engineer friend of mine says, “just put my hand on the brake lever and I’ll drill that well.” He probably can, but few of the rest of us can, or want to.

We want to see operating parameters, performance metrics, and process trends – all because we want the necessary information and knowledge to assimilate understanding and invoke our skill set (wisdom). In this scenario we are responding to stimulus; we are applying “reactive analytics”. But systems get more complex, operations become more intertwined, and performance expectations become razor-thin. With this complexity grows the demand for better assistance from technology. In that case, the software performs the integrated analysis, and the result is “predictive analytics”. And with predictive analytics come its close cousins: decision models and decision trees.
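
As a concrete illustration of the predictive side, here is a minimal sketch that trains a decision tree on hypothetical drilling parameters; the feature names, the data, and the stuck-pipe scenario are assumptions for illustration, not a production model.

    # Minimal sketch: a decision tree trained on hypothetical drilling parameters
    # (weight on bit, rotary speed, torque) to flag conditions that preceded a
    # stuck-pipe event. The feature names and data are illustrative only.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [weight_on_bit_klbs, rotary_speed_rpm, torque_kftlbs]
    history = [
        [15, 120, 8],
        [18, 110, 9],
        [25,  60, 14],
        [27,  55, 15],
    ]
    # 1 = a stuck-pipe incident followed, 0 = normal drilling
    outcomes = [0, 0, 1, 1]

    model = DecisionTreeClassifier(max_depth=2).fit(history, outcomes)

    # Score the current, real-time reading before the driller has to react.
    current = [[26, 58, 14]]
    print("risk of stuck pipe:", model.predict(current)[0])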

Sullivan McIntyre, in his article From Reactive to Predictive Analytics, makes an observation about predictive analytics in social media that is mirrored in operations:

There are three key criteria for making social data useful for making predictive inferences:

  • Is it real-time? (Or as close to real-time as possible)
  • Is it metadata rich?
  • Is it integrated?

Once these criteria are established, along with the nature of the real-time data and the migration of historical data into the real-time stream, predictive analytics becomes achievable.
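
Translated to operations, a reading that meets all three criteria might look something like the sketch below; the field names are hypothetical.

    # Sketch of an operational reading satisfying the three criteria:
    # a timestamp close to acquisition time (real-time), descriptive metadata,
    # and identifiers that let it be joined with other systems (integrated).
    # Field names are hypothetical.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        well_id: str          # key for integration with other data stores
        sensor_id: str
        value: float
        unit: str             # metadata: engineering unit
        quality: str          # metadata: e.g. "good", "suspect", "stale"
        acquired_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    r = Reading(well_id="W-101", sensor_id="WOB-1", value=26.4, unit="klbs", quality="good")
    print(r)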

The Big Crew Change

May 17, 2011

“The Big Crew Change” is an approaching event within the oil and gas industry when the mantle of leadership will move from the “calculators and memos” generation to the “connected and Skype” generation. In a blog post four years ago, Rembrandt observed:

“The retirement of the workforce in the industry is normally referred to as “the big crew change”. People in this sector normally retire at the age of 55. Since the average age of an employee working at a major oil company or service company is 46 to 49 years old, there will be a huge change in personnel in the coming ten years, hence the “big crew change”. This age distribution is a result of the oil crises in ‘70s and ‘80s as shown in chart 1 & 2 below. The rising oil price led to a significant increase in the inflow of petroleum geology students which waned as prices decreased.”

Furthermore, a Society of Petroleum Engineers study found:

“There are insufficient personnel or ‘mid-careers’ between 30 and 45 with the experience to make autonomous decisions on critical projects across the key areas of our business: exploration, development and production. This fact slows the potential for a safe increase in production considerably.”

A study undertaken by Texas Tech University makes several points about the state of education and the employability of graduates during this crew change:

  • Employment levels at historic lows
  • 50% of current workers will retire in 6 years
  • Job prospects: ~100% placement for the past 12 years
  • Salaries: Highest major in engineering for new hires

The big challenge: Knowledge Harvesting. “The loss of experienced personnel combined with the influx of young employees is creating unprecedented knowledge retention and transfer problems that threaten companies’ capabilities for operational excellence, growth, and innovation.” (Case Study: Knowledge Harvesting During the Big Crew Change).

In a blog by Otto Plowman, “Retaining knowledge through the Big Crew Change”, we see that

“Finding a way to capture the knowledge of experienced employees is critical, to prevent “terminal leakage” of insight into decisions about operational processes, best practices, and so on. Use of optimization technology is one way that producers can capture and apply this knowledge.”

When the retiring workforce fails to convey the important (critical) lessons learned, the gap is filled by data warehouses, knowledge systems, adaptive intelligence, and innovation. Perhaps the biggest challenge is innovation. Innovation will drive the industry through the next several years. Proactive intelligence, coupled with terabyte upon terabyte of data, will form the basis.

The future: the nerds will take over from the wildcatters.

Multi-Nodal, Multi-Variable, Spatio-Temporal Datasets

April 21, 2011

Multi-Nodal, Multi-Variable, Spatio-Temporal Datasets are large-scale datasets encountered in real-world data-intensive environments.

Example Dataset #1

A basic example would be the heat distribution within a chimney at a factory. Heat sensors are distributed throughout the chimney and readings are taken at periodic intervals. Since the laws of thermodynamics within a chimney are well understood, the interaction between the monitoring devices can be modeled. Predictive analysis could conceivably be performed on the dataset, and chimney cracks could be detected, or even predicted, in real time.

In this scenario, data points consist of 1) multiple sensors or data acquisition devices, 2) multiple spatial locations, and 3) temporally separated samples. When a sensor fails, it is simply removed from the processing until the sensor is repaired (during plant maintenance).
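
A minimal sketch of that masking step, assuming hypothetical sensor names and readings:

    # Sketch of example #1: periodic readings from heat sensors in a chimney,
    # with a failed sensor masked out of the averaging until it is repaired.
    # Sensor names and values are illustrative.
    import numpy as np

    readings = {                    # one sampling interval
        "T-01": 412.0,
        "T-02": 418.5,
        "T-03": None,               # failed sensor: no valid reading
        "T-04": 421.2,
    }

    values = np.array([v if v is not None else np.nan for v in readings.values()])
    mean_temp = np.nanmean(values)  # failed sensor excluded from the calculation
    print(f"mean chimney temperature: {mean_temp:.1f}")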

Example Dataset #2

A second example would be the interconnected river and lake levels within a single geographic area. Distinct monitoring points are located at specific geo-spatial locations, and those points are linked by interconnected transfer functions and models. Each of the monitoring points consists of multiple data acquisitions, and each data acquisition is sampled at random (or predetermined) intervals.

As a result, data points consist of 1) multiple sensors, 2) multiple spatial locations, and 3) temporally separated samples. In this scenario, sensors may fail – or go temporarily offline – in a random, unpredictable manner. Sensors must be taken out of the processing until data validity returns. Due to the interconnectedness of the sensor locations, and the interrelationships between the sensors, sufficient redundant data could be present to permit suitable analytical processing in the absence of some of the data.
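
A minimal sketch of how that redundancy might be exploited, assuming a hypothetical, pre-calibrated relationship between neighboring gauges:

    # Sketch of example #2: when a river gauge goes offline, neighboring gauges
    # and a simple (assumed) transfer relationship stand in for the missing value.
    # Station names, weights, and levels are illustrative.
    levels = {"upstream": 3.20, "tributary": 1.10, "downstream": None}  # metres

    # Assumed relationship: the downstream level tracks a weighted sum of its
    # upstream neighbours plus a fixed offset, calibrated from history.
    WEIGHTS = {"upstream": 0.8, "tributary": 0.3}
    OFFSET = 0.15

    if levels["downstream"] is None:
        estimate = sum(WEIGHTS[k] * levels[k] for k in WEIGHTS) + OFFSET
        print(f"downstream gauge offline; estimated level {estimate:.2f} m")
    else:
        print(f"downstream level {levels['downstream']:.2f} m")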

Example Dataset #3

The most complex example could be aerial chemical contamination sampling. In this scenario, the chemical distribution is continuously changing as the result of understood, but not fully predictable, weather behavior. Sampling devices would consist of 1) airborne sampling devices (balloons) providing specific, limited sample sets, 2) ground-based mobile sampling units (trucks) providing extensive sample sets, and 3) fixed, pole-mounted sampling units whose data is downloaded at relatively long intervals (hours or days).

In this scenario, multiple, non-uniform data sampling elements are positioned at non-uniform (and mobile) locations, with data collection performed in a fully asynchronous fashion. This data cannot be stored in flat-table structures, and the structure chosen must provide enough relevant information to fill in the gaps in the data.
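
One way to hold such data is as self-describing records rather than a flat table; the sketch below assumes a hypothetical schema purely for illustration.

    # Sketch of example #3: asynchronous samples from balloons, trucks, and
    # pole-mounted units stored as self-describing records rather than columns
    # in a flat table. The schema is a hypothetical illustration.
    from datetime import datetime

    samples = [
        {"platform": "balloon", "id": "B-7",  "time": datetime(2011, 4, 20, 9, 15),
         "lat": 29.76, "lon": -95.36, "alt_m": 850, "species": {"SO2": 0.012}},
        {"platform": "truck",   "id": "T-3",  "time": datetime(2011, 4, 20, 9, 17),
         "lat": 29.74, "lon": -95.40, "alt_m": 0,
         "species": {"SO2": 0.020, "NO2": 0.008, "O3": 0.031}},
        {"platform": "pole",    "id": "P-12", "time": datetime(2011, 4, 19, 22, 0),
         "lat": 29.75, "lon": -95.38, "alt_m": 10, "species": {"SO2": 0.015}},
    ]

    # Because each record carries its own location, time, and species list,
    # gaps in one platform's coverage can be filled from whatever else is nearby.
    recent_so2 = [(s["id"], s["species"]["SO2"]) for s in samples if "SO2" in s["species"]]
    print(recent_so2)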

Information Theory and Information Flow

January 30, 2011

(originally posted on blogspot January 28, 2010)

Information is the core, the root, of any business. But exactly what is information? Many will immediately begin explaining computer databases. But only a small portion of information theory actually concerns computer databases.

Information is a concrete substance in that it is a quantity that is sought, it is a quantity that can be sold, and it is a quantity that is protected.

Wikipedia’s definition: “Information is any kind of event that affects the state of a dynamical system. In its most restricted technical sense, it is an ordered sequence of symbols. As a concept, however, information has many meanings. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, and representation.” (http://en.wikipedia.org/wiki/Information)

Information Theory, then, is not the study of bits and bytes. It is the study of information itself – more precisely, the quantification of information. Fundamental to Information Theory is the acquisition of information, along with the extraction of the true information from the extraneous. In electrical engineering, this process is addressed by signal conditioning and noise filtering. In the mathematical sciences (specifically the probability sciences), the acquisition of information is the investigation into the probability of events and the correlation of events – both as simultaneous events and as cause-and-effect events. Process control looks to the acquisition of information to lead to more optimal control of the processes.
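
As a small example of the signal-conditioning idea, a moving-average filter pulls the underlying trend out of a noisy measurement stream; the window size and data below are illustrative.

    # Sketch of signal conditioning: a simple moving average smooths a noisy
    # measurement stream so the underlying trend is easier to see.
    def moving_average(samples, window=3):
        """Return the smoothed series; each point averages its trailing window."""
        smoothed = []
        for i in range(len(samples)):
            start = max(0, i - window + 1)
            smoothed.append(sum(samples[start:i + 1]) / (i + 1 - start))
        return smoothed

    noisy = [10.2, 9.7, 10.5, 14.9, 10.1, 9.8, 10.3]   # one spurious spike
    print(moving_average(noisy))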

So the acquisition of a clear signal, the predictive nature of that information, and the utilization of that information are at the root of information theory.

C. E. Shannon published a paper in 1948, A Mathematical Theory of Communication, which served to introduce the concept of Information Theory to modern science. His tenet is that communication systems (the means for dispersal of information) are composed of five parts (a toy sketch of the full chain follows the list):

  1. An information source (radio signal, DNA, industrial meter)
  2. A transmitter
  3. The channel (medium used to transmit)
  4. A receiver
  5. A recipient.
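
A toy end-to-end sketch of these five parts in code, with an assumed bit-flip error standing in for channel noise; the encoding is purely illustrative.

    # Toy sketch of Shannon's five-part model: a source message passes through a
    # transmitter, a (noisy) channel, and a receiver before reaching the
    # recipient. The encoding and noise model are purely illustrative.
    import random

    def transmitter(message):                 # encode text to a bit string
        return "".join(f"{ord(c):08b}" for c in message)

    def channel(bits, error_rate=0.01):       # flip the occasional bit (noise)
        return "".join(b if random.random() > error_rate else str(1 - int(b)) for b in bits)

    def receiver(bits):                       # decode bits back to text
        chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
        return "".join(chr(int(c, 2)) for c in chars)

    source = "WOB=26.4"                       # information source (an industrial meter)
    print("recipient sees:", receiver(channel(transmitter(source))))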

Since the information flow must be as distraction-free and noise-free as possible, digital systems are often employed for industrial and parameterized data. Considerations then focus on data precision, latency, clarity, and storage.

Interestingly, the science of cryptography actually looks for obfuscation: data purity, but hidden. Within cryptography, the need for precise, timely, and clear information is as important as ever, but the encapsulation of that information into chunks of meaningless drivel is the objective.

But then the scientist (as well as the code breaker) is attempting to achieve just the opposite: finding patterns, tendencies, and clues. These patterns, tendencies, and clues are the substance of the third phase of the Data –> Information –> Knowledge –> Understanding –> Wisdom progression. And finding these patterns, tendencies, and clues is what gives the industrial information user the ability to improve performance and, as a result, profitability.

The Digital Oilfield is a prime example of the search for more and better information. As the product becomes harder to recover – shale gas, undersea petroleum, horizontal drilling, etc. – the importance of the ability to mine patterns, tendencies, and clues is magnified.

Hence the Digital Oilfield is both lagging in the awakening to the need for information and leading in the resources to uncap the information.

The Digital Oilfield, Part 2

January 30, 2011

(originally posted on blogspot January 18, 2010)

“The improved operational performance promised by a seamless digital oil field is alluring, and the tasks required to arrive at a realistic implementation are more specialized than might be expected.” (http://www.epmag.com/archives/digitalOilField/1936.htm)

Seamless integrated operations require a systematic view of the entire exploration process. But the drilling operation may be the largest generator of diverse operational and performance data, and may produce more downstream data and information than any other process. Additionally, drilling is one of the most legally exposed processes in energy production – BP’s recent Gulf disaster is an excellent example.

The seamless, integrated, digital oilfield is data-centric. Data is at the start of the process, and data is at the end of the process. But data is not the objective. In fact, raw data can be an impediment to information and knowledge. Yet data is the base of the information-and-knowledge tree – data begets information, information begets knowledge, knowledge begets wisdom. Bringing data up to the next level (information) or the subsequent level (knowledge) requires a systematic, root-level knowledge of the data available within an organization, the data which should be available within an organization, and the meaning of that data.

Data mining is the overarching term used in many circles to describe the process of developing information and knowledge – in particular, taking data to the level of knowledge. Converting data to information is often no more complex than producing a pie chart or an x-y scatter chart, but that information requires extensive operational experience to analyze and understand. Data mining takes data into the knowledge tier: it extracts the tendencies of operational metrics to foretell an outcome.
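
As a minimal sketch of that tendency-to-outcome step, a simple trend fit on a hypothetical operational metric (rate of penetration) projects where performance is heading; the numbers are illustrative.

    # Sketch of the data-to-knowledge step: fit a trend to a drilling metric
    # (hypothetical rate-of-penetration readings) and project it forward to
    # foretell where performance is heading. Numbers are illustrative.
    import numpy as np

    hours = np.array([0, 1, 2, 3, 4, 5])
    rop   = np.array([62.0, 60.5, 58.8, 57.2, 55.9, 54.1])   # ft/hr, drifting down

    slope, intercept = np.polyfit(hours, rop, 1)              # simple linear trend
    forecast_hour = 8
    print(f"trend: {slope:.2f} ft/hr per hour; "
          f"projected ROP at hour {forecast_hour}: {slope * forecast_hour + intercept:.1f} ft/hr")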

Fortunately, there are several bright and shining examples of entrepreneurs developing the data-to-knowledge conversion. One bright and promising star is Verdande’s DrillEdge product (http://www.verdandetechnology.com/products-a-services/drilledge.html). Although this blog does not support or advocate particular technologies as a matter of policy, this one does illustrate forward thinking and systematic data-to-knowledge development.

A second example is PetroLink’s modular data acquisition and processing model (http://www.petrolink.com). This product utilizes modular, vendor-agnostic data accumulation tools (in particular, interfaces to Pason and MD-TOTCO), modular data repositories, modular equation processing, and modular displays. All of this is accomplished through the WITSML standard (http://en.wikipedia.org/wiki/WITSML).
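
Because WITSML is XML-based, a consumer can read the same log regardless of which rig-site system produced it. The sketch below parses a simplified, hypothetical fragment; it is a stand-in for the idea, not the actual WITSML schema.

    # Sketch of vendor-agnostic acquisition: pull curve readings out of an
    # XML log document. This fragment is a simplified, hypothetical stand-in
    # for a real WITSML log, not the actual schema.
    import xml.etree.ElementTree as ET

    doc = """
    <log well="W-101">
      <curve mnemonic="DEPTH" unit="m"/>
      <curve mnemonic="ROP" unit="m/hr"/>
      <data>1523.4,18.2</data>
      <data>1524.1,17.9</data>
    </log>
    """

    root = ET.fromstring(doc)
    mnemonics = [c.get("mnemonic") for c in root.findall("curve")]
    rows = [dict(zip(mnemonics, row.text.split(","))) for row in root.findall("data")]
    print(rows)   # [{'DEPTH': '1523.4', 'ROP': '18.2'}, {'DEPTH': '1524.1', 'ROP': '17.9'}]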

Future blogs will consider the movement of data, latency, reliability, and synchronization.

The Digital Oilfield, Part 1

January 30, 2011

(originally posted on blogspot January 17, 2010)

The oil business (or bidniz as the old hands call it) has evolved in drilling, containment, control, and distribution. But the top-level system view has gone largely ignored. Certainly there are pockets of progress, and certainly there are several quality companies producing centralized data solutions. But even these solutions focus on the acquisition of the data while ignoring the reason for the data.

“Simply put Digital Energy or Digital Oilfields are about focusing information technology on the objectives of the petroleum business.” (www.istore.com, January 17, 2011)

Steve Hinchman, Marathon’s Senior VP of World Wide Production, in a speech to the 2006 Digital Oil Conference, said “Quality, timely information leads to better decisions and productivity gains” and “Better decisions lead to better results, greater credibility, more opportunities, greater shareholder value.”

“Petroleum information technology (IT), digitized real-time downhole data and computer–aided practices are exploding, giving new impetus to the industry. The frustrations and hesitancy common in the 1990s are giving way to practical solutions and more widespread use by the oil industry. Better, cheaper and more secure data transmission through the Internet is one reason why.” (The Digital Oilfield, Oil and Gas Investor, 2004)

Future Digital Oilfield development will include efforts to integrate drilling data into engineering and decision making. This integration consists of:

  1. Developing and integrating the acquisition of data from all phases of the drilling operation. The currently disjoint data (historical and future) will be brought together into a master data store architecture consisting of a Professional Petroleum Data Model (www.ppdm.org), various legacy commercial systems, and various internal custom data stores.
  2. Developing a systematic real-time data approach including data processing, analysis, proactive actioning, and integrated presentations. Such proactive, real-time processing includes collision avoidance, pay-zone tracking and analysis, and rig performance. Included is a new technology we are pushing for analysis and recommendations on the best rig configuration and performance (a minimal sketch of this kind of real-time check appears after this list).
  3. Developing a systematic post-drill data analysis and centralized data recall capability for field analysis, offset well comparison, and new-well engineering decisions. Central to this effort will be data analysis, data mining, and systematic data-centric decision making.
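
A minimal sketch of the real-time check described in item 2, with hypothetical parameter names and limits:

    # Sketch of the proactive, real-time idea: each incoming reading is checked
    # against engineering limits and turned into an action before a person has
    # to notice it. Parameter names and limits are hypothetical.
    LIMITS = {"torque_kftlbs": 20.0, "standpipe_pressure_psi": 3500.0}

    def handle_reading(name, value):
        limit = LIMITS.get(name)
        if limit is not None and value > limit:
            return f"ALERT: {name}={value} exceeds limit {limit}; notify driller"
        return f"ok: {name}={value}"

    stream = [("torque_kftlbs", 14.2), ("standpipe_pressure_psi", 3620.0)]
    for name, value in stream:
        print(handle_reading(name, value))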

Watch the Digital Oilfield over the next few months as the requirements for better control, better prediction, and better decision making move toward center stage.

