
Making new IT work for the business

September 23, 2011

I found an EXCELLENT article in the Digital Energy Journal by Dutch Holland. In this article, he explores different strategies for transforming operational requirements into successful initiatives.

Without stealing too much of his well-articulated article, the five approaches normally used are:

  • The by-the-book business analyst
  • The business-experienced analyst
  • The businessman CIO
  • The IT expert inside the business
  • The operations-led interface

I encourage anyone attempting to implement an operations-centric technological solution to read his article.

http://www.findingpetroleum.com/n/Making_new_IT_work_for_the_business/d1a1861b.aspx

“When trying to connect technology innovation with business, an intelligent interface between the two is required. It must be able to translate business opportunity into technical requirements; innovate, test and evaluate; and seamlessly implement new technology into the business.” ~Dutch Holland


Predictive Analytics

September 9, 2011

Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields (Wikipedia). But operations – manufacturing, processing, and the like – have been a little slower to embrace the concept. A drilling engineer friend of mine says “just put my hand on the brake lever and I’ll drill that well”. He probably can, but few of the rest of us can, or want to.

We want to see operating parameters, performance metrics, and process trends. All this because we want the necessary information and knowledge to assimilate understanding and invoke our skill set (wisdom). In this scenario, we are responding to stimulus; we are applying “reactive analytics”. But systems get more complex, operations become more intertwined, and performance expectations become razor-thin. With this complexity grows the demand for better assistance from technology. In this case, the software performs the integrated analysis, and the result is “predictive analytics”. And with predictive analytics come the close cousins: decision models and decision trees.
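To make the reactive/predictive distinction concrete, here is a minimal sketch – the readings, threshold, and window size are invented for illustration – contrasting an alarm that fires after a limit is crossed with a simple trend projection that anticipates the crossing:

```python
# Reactive vs. predictive handling of a stream of operating values.
# Readings, threshold, and window size are hypothetical.

readings = [70.1, 70.4, 70.9, 71.6, 72.5]  # e.g., pressure samples
THRESHOLD = 75.0

# Reactive analytics: respond only after the limit is exceeded.
if readings[-1] > THRESHOLD:
    print("ALARM: limit exceeded")

# Predictive analytics (naive): project the recent trend forward
# and estimate when the limit WILL be exceeded.
window = readings[-4:]
slope = (window[-1] - window[0]) / (len(window) - 1)  # avg change per sample
if slope > 0:
    samples_to_limit = (THRESHOLD - readings[-1]) / slope
    print(f"Projected to exceed limit in ~{samples_to_limit:.1f} samples")
```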

Sullivan McIntyre, in his article From Reactive to Predictive Analytics, makes an observation about predictive analytics in social media that is mirrored in operations:

There are three key criteria for making social data useful for making predictive inferences:

  • Is it real-time? (Or as close to real-time as possible)
  • Is it metadata rich?
  • Is it integrated?

Once these criteria are established – the nature of the real-time data understood, and historical data migrated into the real-time stream – predictive analytics becomes achievable.
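As a rough illustration of those three criteria applied to a single operational data point (the field and tag names below are hypothetical, not from any particular system):

```python
# One operational measurement carrying the three qualities named above:
# real-time (timestamped at acquisition), metadata-rich, and integrated
# (keyed to other systems). Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Measurement:
    value: float
    timestamp: datetime                               # real-time
    metadata: dict = field(default_factory=dict)      # metadata-rich
    source_keys: dict = field(default_factory=dict)   # integrated

m = Measurement(
    value=71.6,
    timestamp=datetime.now(timezone.utc),
    metadata={"units": "bar", "sensor": "P-101", "quality": "good"},
    source_keys={"well_id": "W-42", "historian_tag": "RIG1.P101"},
)
```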

What is Content?

September 8, 2011

Several internet articles and blogs address the meaning of content from an internet perspective. From this perspective, content is the (meaningful) stuff on a page, the presentation of information to the seeker.

But content within an operations-centric perspective is entirely different. Here, the databases and operational tools must treat content as data reflecting the information being sought in the pursuit of knowledge. Thus, paraphrasing Scottie Claiborne (http://www.successful-sites.com/articles/content-claiborne-content1.php), “content is the stuff in your operations system; good content is useful information”.

Therefore, content is the meaningful data and the presentation of this data as information.

Content can, and should be, redundant. Not redundant from a back-up perspective; redundant from an information-theory perspective – data that is inter-related and inter-correlated. (Data that is directly calculated need not be stored; however, the method of calculation may change, so the original calculation may prove useful.) Inter-correlated data may be thought of in terms of weather: wind speed, temperature, pressure, humidity, etc. are individual, measurable values, but they inter-relate, and perfectly valid inferences may be made in the absence of one or more of these values. When historical (temporal) and adjacent (geospatial) values are brought into the content, then, according to information theory, more and more redundancy exists within the dataset.
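The weather analogy can be made concrete: if humidity correlates with temperature in the historical record, a missing humidity value can be inferred from temperature. A minimal sketch (the numbers are invented, and a single predictor is used for brevity; real data would warrant more variables and a better model):

```python
# Inferring a missing, correlated value from a redundant dataset.
# Observed pairs are invented for illustration.

temperature = [18.0, 21.0, 24.0, 27.0, 30.0]
humidity    = [78.0, 72.0, 66.0, 60.0, 54.0]

# Ordinary least-squares fit of humidity on temperature (closed form).
n = len(temperature)
mean_t = sum(temperature) / n
mean_h = sum(humidity) / n
slope = (sum((t - mean_t) * (h - mean_h) for t, h in zip(temperature, humidity))
         / sum((t - mean_t) ** 2 for t in temperature))
intercept = mean_h - slope * mean_t

# A humidity reading drops out; infer it from the correlated temperature.
estimated_humidity = intercept + slope * 25.5
print(f"inferred humidity ≈ {estimated_humidity:.1f}%")  # ≈ 63.0%
```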

Having identified the basis of content, the operations system designer should perform content analysis. Content analysis is both qualitative and quantitative, but careful attention to systems design and systems management will permit increased quantification of the results. What is content analysis in its most basic form? The designer asking the questions: “What is the purpose of the data? What outcomes are expected from the data? How will the data be imparted to produce the desired behavior?”

So how do we quantify the importance of specific data / content? How do we choose which data / content to retain? These questions are so difficult to answer that the normal response is to save everything, forever. And since data not retained is data lost – and lost forever – this approach seems reasonable in a world of diminishing data-storage costs. But then the cost and complexity of information retrieval grow.

The concept and complexity of data retrieval is left for another day…

The Value of Real-Time Data

August 31, 2011

Real-time data is a challenge to any process-oriented operation. But the functionality of the data is difficult to describe in a way that team members not well versed in data management can grasp. Toward that end, four distinct phases of data have been identified:

  1. Real-Time: streaming data
    visualized and considered – system responds
  2. Forensic: captured data
    archived, condensed – system learns
  3. Data Mining: consolidated data
    hashed and clustered – system understands
  4. Predictive Analytics: patterned data
    compared and matched – system anticipates

A more detailed explanation of these phases follows:

Control and Supervision: Real-time data is used to provide a direct HMI (human-machine interface) and permit the human operator to monitor and control the operations from his console. The control and supervision phase of real-time data does not, as part of its function, record the data. (However, certain data logs may be created for legal or application development purposes.) Machine control and control feedback loops require, as a minimum, real-time data of sufficient quality to provide steady operational control.
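As a toy illustration of that control-and-supervision phase – the setpoint, gain, and one-line process model are all invented – a minimal proportional feedback loop over streaming values might look like:

```python
# Toy proportional feedback loop: real-time data is consumed and acted
# upon, not recorded. Setpoint, gain, and process model are hypothetical.

setpoint = 100.0        # desired process value
gain = 0.5              # proportional gain
process_value = 90.0    # latest real-time measurement

for step in range(5):
    error = setpoint - process_value      # real-time measurement vs. target
    control_output = gain * error         # proportional response
    process_value += control_output       # stand-in for the real process
    print(f"step {step}: pv={process_value:.2f}, out={control_output:.2f}")
```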

Forensic Analysis and Lessons Learned: Captured data (and, to a lesser extent, data and event logs) are utilized to investigate specific performance metrics and operations issues. Generally, this data is kept in some form for posterity, but it may be filtered, processed, or purged. Nevertheless, the forensic utilization does represent post-operational analytics. Forensic analysis is also critical to prepare an operator for an upcoming similar process – similar in function, geography, or sequence.

Data Mining: Data mining is used to research previous operational events in order to locate trends, identify areas for improvement, and prepare for upcoming operations. Data mining is used to identify a bottleneck or problem area, as well as to correlate events that are less than obvious.

Proactive / Predictive Analytics: The utilization of data streams, both present and previous, in an effort to predict the immediate (or distant) future requires historical data, data mining, and the application of learned correlations. Data mining may provide correlated events and properties, but predictive analytics provides the conversion of those correlations into positive, immediate performance and operational changes. (This utilization is not, explicitly, AI – artificial intelligence – but the two are closely related.)
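One way to picture the four phases end to end is as successive stages applied to the same stream. The sketch below is purely schematic – the stream, the archive structure, and the pattern check are stand-ins, not a real historian or mining stack:

```python
# Schematic of the four phases applied to one data stream.
# All values and stages are stand-ins for illustration.
from statistics import mean

stream = [70.1, 70.4, 70.9, 71.6, 72.5, 73.3]

# 1. Real-time: visualize the latest value and respond.
latest = stream[-1]

# 2. Forensic: archive and condense the captured run.
archive = {"samples": list(stream), "avg": mean(stream), "max": max(stream)}

# 3. Data mining: search archived runs for a trend (here, rising runs).
rising_runs = [run for run in [archive]
               if run["samples"][-1] > run["samples"][0]]

# 4. Predictive analytics: match the live pattern against mined ones.
if rising_runs and latest > stream[0]:
    print("pattern matches a known rising trend: anticipate intervention")
```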

The Big Crew Change

May 17, 2011

“The Big Crew Change” is an approaching event within the oil and gas industry when the mantle of leadership will move from the “calculators and memos” generation to the “connected and Skype” generation. In a blog post four years ago, Rembrandt observed:

“The retirement of the workforce in the industry is normally referred to as “the big crew change”. People in this sector normally retire at the age of 55. Since the average age of an employee working at a major oil company or service company is 46 to 49 years old, there will be a huge change in personnel in the coming ten years, hence the “big crew change”. This age distribution is a result of the oil crises in ‘70s and ‘80s as shown in chart 1 & 2 below. The rising oil price led to a significant increase in the inflow of petroleum geology students which waned as prices decreased.”

Furthermore, a Society of Petroleum Engineers study found:

“There are insufficient personnel or ‘mid-careers’ between 30 and 45 with the experience to make autonomous decisions on critical projects across the key areas of our business: exploration, development and production. This fact slows the potential for a safe increase in production considerably.”

A study undertaken by Texas Tech University makes several points about the state of education and the employability of graduates during this crew change:

  • Employment levels at historic lows
  • 50% of current workers will retire in 6 years
  • Job prospects: ~100% placement for the past 12 years
  • Salaries: highest among engineering majors for new hires

The big challenge: Knowledge Harvesting. “The loss of experienced personnel combined with the influx of young employees is creating unprecedented knowledge retention and transfer problems that threaten companies’ capabilities for operational excellence, growth, and innovation.” (Case Study: Knowledge Harvesting During the Big Crew Change).

In a blog by Otto Plowman, “Retaining knowledge through the Big Crew Change”, we see that

“Finding a way to capture the knowledge of experienced employees is critical, to prevent “terminal leakage” of insight into decisions about operational processes, best practices, and so on. Using optimization technology is one way that producers can capture and apply this knowledge.”

When the retiring workforce fails to convey the important (critical) lessons learned, the gap is filled by data warehouses, knowledge systems, adaptive intelligence, and innovation. Perhaps the biggest challenge is innovation. Innovation will drive the industry through the next several years. Proactive intelligence, coupled with terabyte upon terabyte of data, will form the basis.

The future: the nerds will take over from the wildcatters.

Knowledge as an Asset

February 23, 2011

This is the information age. So we are deluged with information (and sometimes just data). Organizations are only now beginning to grasp the fundamental concept of Knowledge as an Asset.

What constitutes the knowledge asset?

“Unlike information, knowledge is less tangible and depends on human cognition and awareness. There are several types of knowledge – ‘knowing’ a fact is little different from ‘information’, but ‘knowing’ a skill, or ‘knowing’ that something might affect market conditions is something, that despite attempts of knowledge engineers to codify such knowledge, has an important human dimension. It is some combination of context sensing, personal memory and cognitive processes. Measuring the knowledge asset, therefore, means putting a value on people, both as individuals and more importantly on their collective capability, and other factors such as the embedded intelligence in an organisation’s computer systems.”  (http://www.skyrme.com/insights/11kasset.htm)

Tackling the concept of knowledge is challenging. Knowledge Engineering was first conceptualized in the early 1980s (http://en.wikipedia.org/wiki/Knowledge_engineering) – coincident with the advent of the inexpensive personal computer. Since that time, the paradigm has shifted for all aspects of business. Seat-of-the-pants management is no longer suitable. And the requisite empowering of employees resulting from this transformation has completely changed the workplace.

Having said that, some of the most important issues in knowledge acquisition are as follows: (http://epistemics.co.uk/Notes/63-0-0.htm)

  • Most knowledge is in the heads of experts
  • Experts have vast amounts of knowledge
  • Experts have a lot of tacit knowledge
    • They don’t know all that they know and use
    • Tacit knowledge is hard (impossible) to describe
  • Experts are very busy and valuable people
  • Each expert doesn’t know everything
  • Knowledge has a “shelf life”

Moving knowledge within the organization is perhaps the most challenging aspect of corporate knowledge. Knowledge handoff must occur laterally and temporally. Laterally, in that knowledge must be shared in order to convey the necessary understanding; when a critical mass of users gains that understanding, corporate wisdom results. Temporally, in that knowledge must be handed off at shift change (this is true for 24/7 operations as well as global operations).

But the ability to use technology to share the results of technology is lagging. Frequently, hand-off notes are the only tool at anyone’s disposal – very inefficient. Recently, SharePoint sites and wikis have become popular for inter-departmental information and knowledge sharing.

Jerome J. Peloquin wrote an interesting essay (“Knowledge as a Corporate Asset”) in which he opens with “Virtually every business in the world faces the same fundamental problem: Maintenance of their competitive edge through the application and formation of knowledge.” He makes two statements in his conclusion which serve well to wrap up the premise of this essay (Knowledge as an Asset):

  • Information is useless unless we can act upon it, and that implies that it must first be transformed into knowledge.
  • The knowledge asset combines a number of factors which can be objectively proven by the observation and accomplishment of a specific set of criteria.

Enter Knowledge Information Management. Gene Bellinger writes that

“In an organizational context, data represents facts or values of results, and relations between data and other relations have the capacity to represent information. Patterns of relations of data and information and other patterns have the capacity to represent knowledge. For the representation to be of any utility it must be understood, and when understood the representation is information or knowledge to the one that understands. Yet, what is the real value of information and knowledge, and what does it mean to manage it?”

If Knowledge is an Asset, then, like any corporate asset, it must be managed, secured, maintained, and made available as a tool to the employees. If information is the core of business, then the resulting knowledge is the value of the business. And the wisdom (coming from the understanding) is the driving force and the profitability of the business. Smaller, faster, cheaper may be the mantra; knowledge is the asset.

The Data-Information Hierarchy, Part 2

February 11, 2011

Data, as has been established, comprises the organic, elemental source quantities. Data, by itself, does not produce cognitive information and decision-making ability. But without it, the chain Data –> Information –> Knowledge –> Understanding –> Wisdom is broken before it starts. Data is recognized for its discrete characteristics.

Information is the logical grouping and presentation of the data. Information, in a more general sense, is the structure and encoded explanation of phenomena. It answers the who, what, when, where questions (http://www.systems-thinking.org/dikw/dikw.htm) but does not explain these answers, nor instill the wisdom necessary to act on the information. “Information is sometimes associated with the idea of knowledge through its popular use rather than with uncertainty and the resolution of uncertainty.” (An Introduction to Information Theory, John R. Pierce)

The leap from Information through Knowledge and Understanding into Wisdom is the objective sought by data / information analysis, data / information mining, and knowledge systems.

What knowledge is gleaned from the information; how can we understand the interactions (particularly at a system level); and how is this understanding used to make correct decisions and navigate a course to a desired end point?

Wisdom is the key. What good are the historical volumes of data unless informed decisions result? (That is one definition of Wisdom.)

How do informed decisions (Wisdom) result if the data and process are not Understood?

How do we achieve understanding without the knowledge?

How do we achieve the Knowledge without the Information?

But in a very real sense, Information is not the who, what, when, where answers to the questions assimilating data. Quantified, Information is the measure of the order of the system or, equivalently, of the uncertainty it resolves – the entropy of the system.
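That quantified sense of information is Shannon’s: the entropy H = −Σ p(x) log₂ p(x) of a source measures the uncertainty its messages resolve. A minimal computation (the symbol probabilities below are invented):

```python
# Shannon entropy: the quantified measure of information the paragraph
# refers to. The probability distribution is invented for illustration.
from math import log2

probabilities = [0.5, 0.25, 0.125, 0.125]  # must sum to 1

entropy = -sum(p * log2(p) for p in probabilities if p > 0)
print(f"H = {entropy:.3f} bits")  # 1.750 bits for this distribution
```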

Taken together, information is composed of the informational context, the informational content, and the informational propositions. Knowledge, then, is the informed assimilation of the information. And the cognitive conclusion is the understanding.

Thus Data –> Information –> Knowledge –> Understanding –> Wisdom through the judicious use of:

  • data acquisition,
  • data retention,
  • data transition,
  • knowledge mining,
  • cognitive processing, and
  • the application of this combined set into a proactive and forward action plan.
