Penny Rounding Problem

Posted February 10, 2012 by ProfReynolds
Categories: Information Systems, Lonestar College, Software Development


A computer rounding problem that I like to call “The Penny Rounding Problem” has been around for many, many years. At least two movies have used this problem as a core plot element: Office Space and Superman III. The basic problem is that a column of numbers should add up to the total at the bottom, but it does not.

Mark Reynolds is currently at Southwestern Energy where he works in the Fayetteville Shale Drilling group as a Staff Drilling Data Analyst. In this position, he pulls his experiences in data processing, data analysis, and data presentation to improve Southwestern Energy’s work in the natural gas production and mid-stream market.

Recently, Mark has been working toward improved data collection, retention, and utilization in the real-time drilling environment.

www.ProfReynolds.com

For example: 1/3 is represented as .33, or even .333. But if you add .33 together 3 times, you get .99, not 1.00 – a penny off. This is why your final mortgage payment (if you ever actually pay it off) is never exactly the same as the monthly amount. Even worse, take 2/3, or .67. Multiply .66666… by 3 and you get 2.00; multiply .67 by 3 and you get 2.01.

Solving the problem is relatively simple, but requires diligence. Individual calculations must be individually rounded to the correct number of decimal places.

When I teach Excel at the college, I require the student to explicitly ROUND the answer to any mathematical operation involving

  1. possible sub-penny answers (divide by three, multiply by .0475, etc.)
  2. currency
  3. down-stream use of the answer.

Taken individually, these conditions make sense: addition of two numbers will never generate sub-penny digits; non-currency measurements (weight, speed, etc.) do not bother people when the totals are off by small decimal fractions; and if the result of a calculation is never used, no one cares.

So when an interest equation is entered into Excel
= A3 * A4 / 12,
you should change it to be
= ROUND( A3 * A4 / 12, 2 ) so that the answer is rounded to 2 decimal places.
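The Excel rule above translates directly to code. A minimal Python sketch (the $100, three-installment plan is a hypothetical example) shows both the penny drift and the round-each-step fix:

```python
# The penny rounding problem: 1/3 represented as .33, summed three times.
one_third = round(1 / 3, 2)          # 0.33 -- the best two-decimal value
total = one_third * 3
print(f"{total:.2f}")                # 0.99 -- a penny short of 1.00

# The fix requires diligence: round every calculation that can produce
# sub-penny digits, then let the final entry absorb the difference --
# which is why a final loan payment rarely matches the monthly amount.
payment = round(100 / 3, 2)          # hypothetical 3-installment plan on $100
final_payment = round(100 - payment * 2, 2)
print(payment, final_payment)        # 33.33 33.34
```

This mirrors wrapping each Excel formula in ROUND(…, 2): every intermediate currency value is a whole number of pennies, and the reconciling entry carries the leftover cent.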

So can Richard Pryor get rich by taking all of the rounded, fractional pennies and putting them in his account? This is called salami slicing, and Snopes calls it a legend. But do gas stations do it with your pump price?


Artificial Intelligence vs Algorithms

Posted February 9, 2012 by ProfReynolds
Categories: Data - Information - Knowledge - Understanding - Wisdom, Information Theory, Knowledge Systems, Predictive Analytics, Software Development


I first considered aspects of artificial intelligence (AI) in the 1980s while working for General Dynamics as an Avionics Systems Engineer on the F-16. Over the following three decades, I continued to follow the concept until I came to a realization – AI, as practiced, is just algorithms. Certainly the goals of AI will one day be reached, but the metric by which we would recognize true AI is not well defined.


Consider Denver International Airport. Its baggage handling system was state of the art and touted as AI-based, yet it delayed the airport's opening by 16 months and cost $560M to fix. In the end, the entire system was replaced with a more stable one based not on a learning or deductive system, but on much more basic routing and planning algorithms that could be deterministically designed and tested.

Consider the Houston traffic light system. Mayors have been elected on promises to apply state-of-the-art computer intelligence: interconnected traffic lights, traffic prediction, automatic traffic redirection. Yet the desired AI results in identifiable computer algorithms with definitive behavior and expectations. Certainly an improvement, but not a thinking machine. The closest thing to automation is the remote triggering feature used by the commuter rail and emergency vehicles.

So algorithms form the basis for computer advancement. And these algorithms may be applied with human interaction to learn the lessons necessary for improving computer behavior. Toward this objective, distinct fields of study are untangling interrelated elements: clustering, neural networks, case-based reasoning, and predictive analytics are just a few.

When AI is finally achieved, it will be revolutionary. But until that time, deterministic algorithms, data mining, and predictive analytics will be at the core of qualitative and quantitative advancement.

Making of a Fly

Posted February 6, 2012 by ProfReynolds
Categories: Industry and Applications, Operations, Real-Time


While watching a TED video about algorithms, mention was made of an unrealistic price on Amazon. Apparently two retailers had an out-of-control computer feedback loop.

One company, with lots of good customer points, is in the habit of selling products a little higher than the competition. Anyone’s guess why, but facts are facts – they routinely price merchandise about 25% higher than the competition (and rely on the customer experience points to pull customers away?).

Well, the competition routinely prices merchandise a little lower than the highest priced competitor: about 1% less.

So these computer programs began a game of one-upmanship. A $10.00 product was listed for $12.70 by the first company. Later in the day, the second company’s computer listed the same product for 1% less – $12.57. So the process repeated: $15.96 and $15.80, then $20.07 and $19.87. The process continued until the book was listed for $23,698,655.93, plus shipping. (All numbers are illustrative.)

This story illustrates one of the challenges of automated feedback loops. An engineering instructor once explained it: if the loop gain is a positive value greater than 1, the feedback will either oscillate or latch up.
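The loop can be sketched in a few lines of Python. The markup and undercut ratios are assumptions inferred from the illustrative numbers above (about +27% and −1%); their product, 1.27 × 0.99 ≈ 1.257, is a loop gain greater than 1, so the price diverges instead of settling:

```python
UP, DOWN = 1.27, 0.99    # assumed markup and undercut ratios; gain = 1.2573 > 1

price_a = 12.70          # first company's opening price on a $10.00 product
prices = []
for cycle in range(4):
    price_b = round(price_a * DOWN, 2)   # competitor undercuts by 1%
    prices.append(price_b)
    price_a = round(price_b * UP, 2)     # first company re-marks it up
    prices.append(price_a)

print(", ".join(f"${p:,.2f}" for p in prices[:5]))
# $12.57, $15.96, $15.80, $20.07, $19.87 -- matching the story above
```

Run long enough, the same loop produces an eight-digit book price; nothing in either program ever checks the price for plausibility.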

More on feedback controls for real systems another day.

Read more here: https://www.google.com/#q=making+of+a+fly

Making new IT work for the business

Posted September 23, 2011 by ProfReynolds
Categories: Digital Oilfield, Industry and Applications, Information Systems, Operations, Software Development


I found an EXCELLENT article in the Digital Energy Journal by Dutch Holland. In it, he explores different strategies for transforming operational requirements into successful initiatives.

Without stealing too much of his well articulated article, the five approaches normally used are:

  • The by-the-book business analyst
  • The business-experienced analyst
  • The businessman CIO
  • The IT expert inside the business
  • The operations-led interface

I encourage anyone attempting to implement an operations-centric technological solution to read his article.

http://www.findingpetroleum.com/n/Making_new_IT_work_for_the_business/d1a1861b.aspx

“When trying to connect technology innovation with business, an intelligent interface between the two is required. It must be able to translate business opportunity into technical requirements; innovate, test and evaluate; and seamlessly implement new technology into the business.” ~Dutch Holland

Predictive Analytics

Posted September 9, 2011 by ProfReynolds
Categories: Data - Information - Knowledge - Understanding - Wisdom, Digital Oilfield, Knowledge Systems, Operations, Predictive Analytics


Predictive analytics is used in actuarial science, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields (Wikipedia). But operations – manufacturing, processing, etc. – have been a little slower to embrace the concept. A drilling engineer friend of mine says, “just put my hand on the brake lever and I’ll drill that well.” He probably can, but few of the rest of us can, or want to.

We want to see operating parameters, performance metrics, and process trends – all because we want the information and knowledge needed to assimilate understanding and invoke our skill set (wisdom). In this scenario we are responding to stimulus; we are applying “reactive analytics”. But systems get more complex, operations become more intertwined, and performance expectations become razor-thin. With this complexity grows the demand for better assistance from technology. Here, the software performs the integrated analysis, and the result is “predictive analytics”. And with predictive analytics come its close cousins: decision models and decision trees.

Sullivan McIntyre, in his article From Reactive to Predictive Analytics, makes an observation about predictive analytics in social media that is mirrored in operations:

There are three key criteria for making social data useful for making predictive inferences:

  • Is it real-time? (Or as close to real-time as possible)
  • Is it metadata rich?
  • Is it integrated?

Once these criteria are established – real-time data of the right nature, rich in metadata, and integrated with the migrated historical data – predictive analytics becomes achievable.
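The reactive/predictive distinction can be shown with a toy sketch. The readings and the one-step linear-trend extrapolation are illustrative assumptions, not a method from the article:

```python
def predict_next(window):
    """Fit a least-squares line to the window and extrapolate one step ahead."""
    n = len(window)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(window) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, window)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return slope * n + intercept     # expected value at the next time step

readings = [100.0, 102.0, 104.0, 106.0]   # a hypothetical real-time stream
print(readings[-1])             # reactive: report what just happened
print(predict_next(readings))   # predictive: anticipate what comes next
```

Real predictive analytics layers clustering and richer models on top of this, but the criteria above apply even here: the window must be near real-time and the stream integrated before a trend can be trusted.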

What is Content?

Posted September 8, 2011 by ProfReynolds
Categories: Data - Information - Knowledge - Understanding - Wisdom, Information Theory, Operations


Several internet articles and blogs address the meaning of content from an internet perspective. From this perspective, content is the (meaningful) stuff on a page, the presentation of information to the seeker.

But content within an operations-centric perspective is entirely different. The databases and operational tools must contain content – data reflecting the desired information being sought in the pursuit of knowledge. Thus, paraphrasing Scottie Claiborne (http://www.successful-sites.com/articles/content-claiborne-content1.php), “content is the stuff in your operations system; good content is useful information”.

Therefore, content is the meaningful data and the presentation of this data as information.

Content can, and should, be redundant. Not redundant from a back-up perspective; redundant from an information theory perspective – data that is inter-related and inter-correlated. (Data that is directly calculated need not be stored; however, the method of calculation may change, and therefore the original calculation may prove useful.) Inter-correlated data may be thought of in terms of weather: wind speed, temperature, pressure, humidity, etc. are individual, measurable values, but they inter-relate, and perfectly valid inferences may be made in the absence of one or more of these datums. When historical (temporal) and adjacent (geospatial) datums are brought into the content, then, according to information theory, more and more redundancy exists within the dataset.
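The parenthetical above can be sketched in code: a derived value is computed on demand instead of being stored, so if the method of calculation changes, every record reflects the change. The class, field names, and formula here are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class WeatherReading:
    temperature_f: float
    humidity_pct: float

    @property
    def comfort_index(self) -> float:
        # Illustrative placeholder formula, not a meteorological standard.
        # Because it is computed on read, changing it here retroactively
        # applies to every stored reading.
        return self.temperature_f + self.humidity_pct / 20

r = WeatherReading(temperature_f=90.0, humidity_pct=60.0)
print(r.comfort_index)   # derived content, never stored
```

The trade-off the parenthetical notes still applies: if the original calculation itself has historical value, store its result alongside the inputs.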

Having identified the basis of content, the operations system designer should perform content analysis. Content analysis is both qualitative and quantitative, but careful attention to systems design and systems management will permit increased quantification of the results. What is content analysis in its most basic form? The designer asking: “What is the purpose of the data? What outcomes are expected from the data? How will the data be imparted to produce the desired behavior?”

So how do we quantify the importance of specific data / content? How do we choose which data / content to retain? These questions are so difficult to answer that the normal response is to save everything, forever. And since data not retained is data lost – and lost forever – this approach seems reasonable in a world of diminishing data storage costs. But then the cost and complexity of information retrieval grows.

The concept and complexity of data retrieval is left for another day…

The Value of Real-Time Data, Part 2

Posted September 1, 2011 by ProfReynolds
Categories: Data - Information - Knowledge - Understanding - Wisdom, Predictive Analytics, Real-Time


Previously, predictive analytics was summarized as “system anticipates” (https://profreynolds.wordpress.com/2011/08/31/the-value-of-real-time-data/). But that left a lot unsaid. Predictive analytics is a combination of statistical analysis, behaviour clustering, and system modeling. No one piece of predictive analytics can exist in a vacuum; the real-time system must be statistically analyzed, its behaviour grouped or clustered, and finally a system model built that can use real-time data to anticipate the future – near term and longer.

Examples of predictive analytics in everyday life include credit scores, hurricane forecasts, etc. In each case, past events are analyzed, clustered, and then predicted.

The result of predictive analytics is, therefore, a decision tool. And the decision tree will, to some degree, take into account a predictive analysis.

The output of Predictive Analytics will be descriptive or analytic – subjective or objective. Both outputs are reasonable and viable. Looking at the hurricane predictions, there are analytical computer models (including the so-called spaghetti models) that seek to propose a definitive resulting behaviour; then there are descriptive models that seek to produce a visualization and comprehension of the discrete calculations. By extension, one can generalize that descriptive predictions must be the result of multiple analytic predictions. Perhaps this is true.

Returning to the idea that predictive analytics is comprised of statistical analysis, clustering analysis, and finally system modelling, we see that a sub-field of analytics could be considered: reactive analytics. Reactive analytics seeks to understand the statistical analysis, and even the clustering analysis, with an eye to adapt processes and procedures – but not in real-time. Reactive Analytics is, therefore, the Understanding portion of the Data-Information hierarchy (https://profreynolds.wordpress.com/2011/08/31/the-data-information-hierarcy-part-3/). Predictive Analytics is, therefore, the Wisdom portion of the Data-Information hierarchy.

