
Characteristics of quality software

February 12, 2012

A repost of my son’s blog – July 2009
http://blog.intentdriven.com/2009/07/characteristics-of-quality-software.html

Software Quality

Software Quality is a concept that has been discussed and defined in a number of excellent books and articles. The specific, granular-level characteristics are numerous, and the weight placed on any one aspect may differ from company to company, or even from project to project. However, with the assistance of the Pfleeger and Atlee text (2006) and a text from Selby and Boehm (2009), we can examine several generic properties that are relatively universal.

  • Portability
    This is a measure of the degree of coupling with other software or hardware. Can the software be easily installed and transferred, or does it require a complicated integration with third parties (e.g., SQL Server or a special hardware dongle)?
  • “As-is” utility
    Does the software require heavy customization once it is deployed to the customer?
    (i.e., Reliability, Efficiency, Human Engineering)
  • Maintainability
    In two years, will we be able to fix a problem or add new functionality?
    (i.e., Testability, Understandability, Modifiability)

Human Factors

Bernard suggests that the most basic reason for an implementation to fail is inadequate training and preparation of the operators of the system. Having been involved in several different implementations of new software, I have seen both well-prepared and inadequately prepared staff try to deal with new software. I would venture to say that Bernard is exactly right: improper training is a huge reason why software does not succeed. In my experience, users with a stake in the company don’t WANT to see software fail, but if they don’t have a good reason to make the change, they will unintentionally sabotage the new initiative with “well, we always did it the other way” attitudes.

References

Bernard, A. (2003, December 26). Why implementations fail: The human factor.

Boehm, B. W. Quantitative evaluation of software quality. In R. W. Selby (Ed.), Software Engineering (p. 27). IEEE. Retrieved July 11, 2009, from Google Books.

Pfleeger, S. L., & Atlee, J. M. (2006). Why software engineering? In Software Engineering: Theory and Practice (3rd ed., pp. 9–11). Upper Saddle River, NJ: Pearson Prentice Hall.


Protecting Digital Identities, Part 1

January 31, 2011

A Digital Identity is the mechanism used to identify an individual to computers, networks, the internet, and social media. In the general case, a digital identity is the digital fingerprint of an individual, or of an entity other than an individual; in either case, it is generically called the digital subject. Whatever form it takes, it consists of properties, relationships, attributes, and authentication.

Properties are the characteristics of the digital subject. Within Facebook, properties may include name, age, and marital status. Within a corporate network, the properties may include employment date, withholding exemptions, and supervisor.

Relationships are the correlations between digital subjects. Within Facebook, relationships include friends, family, schools, employers, and special interests. Within the corporate environment, relationships refer to directory access rights, functional groups, and so on.

Attributes are special characteristics of the digital subject and are not too different from properties. Attributes include login name, password, and home server. Generally, attributes are not shared outside the digital authority.

Authentication is the process of verifying the legitimacy of the digital subject. Generally, a username and password are the first line of defense, but authentication factors include the following (a rough code sketch follows the list):

  • what you know (password)
  • what you have (passkey)
  • who you are (fingerprint, retina)
  • what you can do (this is relatively new and is generally seen in the form of CAPTCHAs)
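
As a rough illustration, the sketch below models a digital subject with these pieces in Python. It is a minimal sketch; the class and field names are my own assumptions, not any particular platform’s API.

    # A minimal model of a digital subject: properties, relationships,
    # and attributes, plus an example instance. All names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class DigitalSubject:
        properties: dict = field(default_factory=dict)     # name, age, marital status, ...
        relationships: dict = field(default_factory=dict)  # friends, groups, access rights, ...
        attributes: dict = field(default_factory=dict)     # login name, home server, ... (not shared)

    # Example: a corporate-network subject
    employee = DigitalSubject(
        properties={"employment_date": "2010-06-01", "supervisor": "J. Smith"},
        relationships={"functional_groups": ["accounting"]},
        attributes={"login_name": "jdoe", "home_server": "SRV-01"},
    )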

The protection of digital identity must address many facets, and the laws, ethics, and policies surrounding these protections neither encompass all aspects nor form a seamless shield.

As the digital identity becomes more and more integral to the existence of people in modern societies, the protection and reliability of that identity become paramount.

Protecting the authentication. Authentication protection is the responsibility of both the digital subject and the central account store, and both frequently fall short. The digital subject has shown laziness and disregard toward passwords in numerous scenarios: people tend to reuse only a couple of passwords, making their entire digital life accessible once a single account store has been violated. And within the central account store, passwords may be kept in unencrypted form, they may be encrypted with a breakable two-way cipher, or they may be broken through simple brute-force dictionary comparisons. By far the best solution is many distinct passwords that use a combination of lowercase, uppercase, numbers, and symbols; but these are nearly impossible to remember.
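
One widely used remedy on the account-store side is to keep no password at all, only a salted one-way hash of it: a unique salt per account defeats the simple dictionary comparisons mentioned above, and a one-way hash cannot be reversed the way a two-way cipher can. A minimal sketch in Python, with parameter choices that are illustrative rather than prescriptive:

    # Store only (salt, digest); neither reveals the password.
    import hashlib, hmac, os

    def hash_password(password):
        salt = os.urandom(16)  # unique per account, so identical passwords hash differently
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)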

Protecting the data. All of the authentication protection is meaningless if the digital data itself is unprotected. Unencrypted social security numbers, addresses, and credit card numbers remain pervasive throughout the commercial industries. Remarkably, the medical community is making significant progress toward true information security. This progress is accomplished through the disappearance of paper records and the move to digital-only records, where any view of the records requires 1) an authenticated user and 2) tracking of all access. (Three hospital employees were fired for improperly accessing the records of the shooting victims in Arizona.)
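
Those two requirements are easy to state in code. The sketch below is a minimal illustration, not a real records system; the function and logger names are assumptions of mine.

    # Every view requires an authenticated user and leaves an audit-trail entry.
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("record_access")

    def view_record(user, authenticated, record_id):
        if not authenticated:
            audit.warning("DENIED: %s tried to view %s", user, record_id)
            raise PermissionError("authentication required")
        audit.info("%s viewed %s at %s", user, record_id,
                   datetime.now(timezone.utc).isoformat())
        return {"record_id": record_id}  # stand-in for the actual record

It is exactly this kind of trail that lets improper access be proven, and punished, after the fact.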

Ensuring reliability. Safe and authenticated data is meaningless if it is not accurate, and accuracy has not received the same level of attention as authentication and protection. Mistyped court records and never-updated address and employment records are typical examples. Invalid properties, relationships, and attributes cost money, jobs, relationships, and productivity, and typically no one is held responsible. But the inaccuracies affect all of us.

Summary. Digital identities require multifaceted oversight. Failure at any level of protection, accountability, or reliability will render the records useless and affect the lives of many people. As the inventiveness of nefarious groups improves, so must the determination of the shepherds of the data.

The Digital Oilfield, Part 2

January 30, 2011

(originally posted on blogspot January 18, 2010)

“The improved operational performance promised by a seamless digital oil field is alluring, and the tasks required to arrive at a realistic implementation are more specialized than might be expected.” (http://www.epmag.com/archives/digitalOilField/1936.htm)

Seamless, integrated operations require a systematic view of the entire exploration process. The drilling operation may be the largest generator of diverse operational and performance data, and it may produce more downstream data and information than any other process. Additionally, drilling is one of the most legally exposing processes in energy production; BP’s recent Gulf disaster is an excellent example.

The seamless, integrated, digital oilfield is data-centric. Data is at the start of the process, and data is at the end of the process. But data is not the objective; in fact, raw data is an impediment to information and knowledge. Data is, however, the base of the information and knowledge tree: data begets information, information begets knowledge, and knowledge begets wisdom. Bringing data up to the next level (information) or the subsequent level (knowledge) requires a systematic, root knowledge of the data available within an organization, the data which should be available within an organization, and the meaning of that data.

Data mining is the overarching term used in many circles for the process of developing information and knowledge; in particular, data mining takes data to the level of knowledge. Converting data to information is often no more complex than producing a pie chart or an x-y scatter chart, but that information still requires extensive operational experience to analyze and understand. Data mining takes data into the knowledge tier: it extracts the tendencies of operational metrics to foretell an outcome.
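
To make the data-to-information step concrete, here is a minimal sketch using Python and matplotlib; the drilling numbers are invented for illustration.

    # Raw operational data becomes information as a simple x-y chart.
    import matplotlib.pyplot as plt

    depth_ft = [1000, 2000, 3000, 4000, 5000]  # measured depth (invented data)
    rop_ft_hr = [120, 95, 80, 60, 45]          # rate of penetration (invented data)

    plt.plot(depth_ft, rop_ft_hr, marker="o")
    plt.xlabel("Measured depth (ft)")
    plt.ylabel("Rate of penetration (ft/hr)")
    plt.title("ROP vs. depth")
    plt.show()

Seeing the downward trend is the information tier; deciding what that trend foretells about the operation is the knowledge tier, and that is the part data mining works to automate.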

Fortunately, there are several bright and shining examples of entrepreneurs developing the data-to-knowledge conversion. One promising star is Verdande’s DrillEdge product (http://www.verdandetechnology.com/products-a-services/drilledge.html). Although this blog does not support or advocate this technology as a matter of policy, it does illustrate forward thinking and systematic data-to-knowledge development.

A second example is PetroLink’s modular data acquisition and processing model (http://www.petrolink.com). This product utilizes modular, vendor-agnostic data-accumulation tools (in particular, interfaces to Pason and MD-TOTCO), modular data repositories, modular equation processing, and modular displays, all accomplished through the WITSML standard (http://en.wikipedia.org/wiki/WITSML).
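
WITSML is an XML-based standard, so moving such data between modules amounts to exchanging and parsing XML documents. The sketch below parses a simplified, WITSML-flavored payload with the Python standard library; the element names are illustrative, not the actual WITSML schema.

    # Parse a simplified WITSML-style XML payload (element names illustrative).
    import xml.etree.ElementTree as ET

    payload = """
    <wells>
      <well uid="W-001">
        <name>Example Well 1</name>
      </well>
      <well uid="W-002">
        <name>Example Well 2</name>
      </well>
    </wells>
    """

    root = ET.fromstring(payload)
    for well in root.findall("well"):
        print(well.get("uid"), well.findtext("name"))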

Future blogs will consider the movement of data, latency, reliability, and synchronization.

