
Can we talk about the reliability of information?

The reliability of a system is defined as the system's ability to operate according to its technical specifications, without defects, over a given time interval. The expected duration from the moment of commissioning until the first failure is called the life of the system. Reliability is estimated numerically as the probability that the system will operate without failure up to a moment t > 0, a probability calculated by the designer or the manufacturer from the characteristics of the system's components and from the results of quality control. In order to establish the reliability of a device and its lifespan, the defects that may occur and the possible operating errors are carefully analyzed, and prophylactic methods are introduced to prevent them.
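As an illustration only (an assumption of this note, not something stated in the article), reliability is commonly modeled with an exponential failure law, R(t) = exp(-λt), where λ is the failure rate and 1/λ is the mean time to failure. A minimal sketch under that assumed model:

    import math

    def reliability(t_hours: float, failure_rate: float) -> float:
        """Probability that the system still works at time t, assuming an
        exponential failure model R(t) = exp(-failure_rate * t)."""
        return math.exp(-failure_rate * t_hours)

    # Hypothetical component: one failure expected every 10,000 hours on average.
    lam = 1 / 10_000                  # failure rate, per hour
    print(reliability(1_000, lam))    # ~0.905: about a 90% chance of surviving 1,000 h
    print(1 / lam)                    # mean time to failure: 10,000 hours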

The life of the system is, in most cases, determined by the least reliable components that make it up, or by the failure of the most heavily used component. The designer corrects such a situation through redundancy, that is, by adding elements beyond the minimum necessary to perform the functions. The system will incorporate spare (redundant) elements that work in parallel, or that come into operation only when the basic element "gets tired". It is similar to delivering cars with a spare wheel.
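Why redundancy helps can be sketched in the same spirit (again an illustration, not part of the original text): assuming independent failures, a system of n identical elements in parallel fails only when all of them fail, so its reliability is 1 - (1 - R)^n.

    def parallel_reliability(r_single: float, n: int) -> float:
        """Reliability of n independent redundant elements working in parallel:
        the assembly fails only if every element fails."""
        return 1 - (1 - r_single) ** n

    # Hypothetical element with 90% reliability over the mission time.
    print(parallel_reliability(0.90, 1))   # 0.90   -> no spare
    print(parallel_reliability(0.90, 2))   # ~0.99  -> one spare in parallel
    print(parallel_reliability(0.90, 3))   # ~0.999 -> two spares in parallel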

Computer reliability analysis led to the extension of this study to software products, and to the definition and analysis of software reliability ("Software Reliability Engineering and Testing": John D. Musa, http://members.aol.com/JohnDMusa, as well as Ramon V. Leon, http://web.utk.rdu/~leon).

Analyzing, detecting and correcting errors in software packages (bugs) is a routine activity in any software company. Over the years, various procedures have been tried to automate the detection and correction of errors and to develop software testing mechanisms. An aphorism of Dijkstra's remains famous: "Testing a program can highlight at most the presence of errors, never their absence."

The transition from one operating system to another and from one programming language to another, together with the continuous growth of hardware configurations and the increase in computing speed, under conditions of ever wider computer use, have led to automatic transcription solutions and to studies on the engineering of program reuse, revitalizing programs that had "died" along with the removal of the computers for which they had been written. In fact, new versions of applications appear, adapted to new operating systems and new hardware configurations; stable versions that sometimes last a few years, sometimes less, are replaced by newer versions, more complete and better suited to new conditions and new users.

In software companies there are rules on programming work and quality conditions, which set the average number of errors per thousand lines of code, the average number of errors that can be corrected in the testing stages, and the percentage of errors that may remain undetected. The degree of acceptance of these errors depends on the destination of the system and, of course, influences the selling price of the software product, or of the system as a whole. The requirements are different for a process computer built for a nuclear power plant or for a space program than for a personal, home computer. We speak of the operational safety of the program and of the entire system, and we accept that a program has a limited lifespan, whether because of errors (for example, the discovery of "security holes") or because of the appearance of a new, better version.

But what would be the elements of reliability for information on the Internet? What are the defects or errors that may occur in the handling of this information, and how many of them are directly related to the content? Can we separate the defects that directly concern the files containing a given piece of information from those that affect the servers on which those files are hosted? Can we determine a lifetime specific to the information itself, or is its lifetime given only by that of the server that "publishes" it?

In the case of files accessible via FTP, as in the archives of discussion groups, the information is often kept even after it has lost its actuality or importance. The same often happens with web pages, barring incidents or catastrophes, or mundane situations such as cancelling the subscription or a disk space crisis. We can also imagine situations in which the pedantic administrator of the server (Web or FTP) conscientiously analyzes the catalogued files and, finding that public interest in those documents has disappeared, takes it upon himself to delete them from the server.

The more natural, but too rare, situation is when the author of the document returns to its content: editing the file, modifying and updating it, checking and refreshing the links, changing the form of presentation, responding to critical remarks and possibly changing even the name, the title of the document. But the author, together with his readers, will find that the old version, long gone from the server's disk, persists in the memory of search engines, appearing among the results of thematic searches alongside the new version. Moreover, following a project started in 1996, an impressive archive in California saves and maintains for posterity significant documents from the entire public space (the Internet Archive Wayback Machine, http://web.archive.org, a project started by Alexa Inc. - http://www.)

Just as a program "lives" each time it is called or launched, and we count its lifetime by the number of runs, so the life of a document, a text file on the Internet, can be measured by the number of accesses or views. Intuitively, we would be inclined to regard the period during which the file is available to the public on the Web server as its lifetime, and to consider the number of accesses and references (links to the document in question from other Web pages) as a measure of interest in its content. But placing a file in the public space, among other documents on a Web server, does not yet mean "putting it into operation". It becomes known only when links to it are added to an index page or to an already well-travelled route, recommending it with a brief presentation, or when it gets indexed by search engines. In order to speed up and support this indexing process there are numerous recipes, concerning, on the one hand, the introduction of special records (meta statements with key terms) and, on the other hand, the registration of the document or of the Web site with hundreds, even thousands, of search engines.
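To make the "counting by accesses" idea concrete, here is a minimal sketch (an illustration added to this note; the log path, the document path and the log format are assumptions, following the widespread "combined" access-log layout):

    def count_accesses(log_path: str, document: str) -> int:
        """Count requests for a given document in a Web server access log.
        Assumes the common 'combined' format, where the request line is the
        first quoted field, e.g. "GET /path HTTP/1.1"."""
        hits = 0
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                parts = line.split('"')
                if len(parts) < 2:
                    continue
                request = parts[1].split()
                if len(request) >= 2 and request[1] == document:
                    hits += 1
        return hits

    # Hypothetical example: the "life" of one article measured in accesses.
    print(count_accesses("access.log", "/articles/reliability.html"))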

