Laboratory Organisation, Management and Safety


Acknowledgement

The authors wish to acknowledge the major contribution of Dr Mitchell Lewis, the author or co-author of this chapter in previous editions.

The services provided by laboratories are an essential and fundamental component of health systems across the globe. The essential functions of a haematology laboratory are (1) to provide clinicians with timely, unambiguous and meaningful information to assist in the clinical diagnosis of disease and to monitor response to treatment; (2) to obtain reliable and reproducible data for health screening and epidemiological studies; and (3) to keep abreast of advancing technology as well as aspects of healthcare legislation that might be relevant to modern laboratory practice. The laboratory should also be involved in both the pre-analytical stage (i.e. test selection, blood collection, specimen transport) and the postanalytical stage (i.e. preparing reports, transmission of results and maintaining a data file).

For good laboratory practice, it is essential to have a well-structured organisation with competent direction and management. The principles outlined in this chapter apply to all laboratories, irrespective of their size, although large departments are likely to require the more complex arrangements that are described.

Management structure and function

The management structure of a haematology laboratory should indicate a clear line of accountability of each member of staff to the head of department. In turn, the head of department may be managerially accountable to a clinical director (of laboratories) and thence to a hospital or health authority executive committee. The head of department is responsible for departmental leadership, for ensuring that the laboratory has authoritative representation within the hospital and for ensuring that managerial and administrative tasks are performed efficiently. Where the head of the department delegates managerial tasks to others, these responsibilities must be clearly defined and stated. Formerly, the director was usually a medically qualified haematologist, but nowadays in many laboratories, this role is being undertaken by appropriately qualified biomedical scientists, while haematologists serve as consultants. In that role, haematologists should be fully conversant with the principles of laboratory practice, especially with interpretation and clinical significance of the various analytical procedures, so as to provide a reliable and authoritative link between the laboratory and clinicians. Furthermore, all medical staff, especially junior hospital doctors, should be invited to visit the laboratory, to see how it functions and how various tests are performed; they should gain an understanding of the level of complexity of tests, their clinical utility and their cost, which should give them the ability and confidence to order tests rationally.

Management of the laboratory requires an executive committee answerable to the head of department. Under this executive, there should be a number of designated individuals responsible for implementing the functions of the department ( Table 24-1 ).

Table 24-1
Example of components of a management structure
Executive committee:
  Head of department
  Business manager
  Consultant haematologist
  Principal scientific officer
  Safety officer
  Quality control officer
  Computer and data processing supervisor
Sectional scientific/technical heads:
  Cytometry
  Blood film morphology
  Immunohaematology
  Haemostasis
  Blood transfusion
  Special investigations (haemolytic anaemias, haemoglobinopathies, cytochemistry, molecular techniques, etc.)
Clerical supervisor

The activities of the various members of staff clearly overlap and there must be adequate effective communication between them. There should be regular briefings at meetings of technical heads, with their section staff. The only way to avoid unauthorised ‘leakage’ of information from policy-making committees is to ensure that all members of staff are kept fully informed of any plans which might have a bearing on their careers, working practices and wellbeing.

In many countries, there are now requirements established by regulatory agencies for accreditation of laboratories and audit of their performance, as well as documents on laboratory management and practice from standards-setting authorities; there is also a plethora of guidelines from national and international professional bodies. These may have a profound impact on the broader organisation of a laboratory. For example, the National Institute for Health and Care Excellence has responded to well-documented concerns regarding the accuracy of diagnosis of haematological malignancies by mandating formal integration of the affected laboratory services into a single laboratory structure, with its own lead and governance structure. At a single stroke this guidance eliminates the possibility of a single-handed scientist, haematologist or histopathologist reporting on the presence or absence of a haematological malignancy without being part of a larger laboratory organisation that accepts responsibility for the internal validation and cross-checking of results. These changes are likely to require significant reconfiguration of existing services and better collaboration between haematologists, histopathologists, cytogeneticists and molecular geneticists.

Staff appraisal

All members of staff should receive training to enhance their skills and to develop their careers. This requires setting of goals and regular appraisal of progress for both managerial and technical ability. The appraisal process should cascade down from the head of department and appropriate training must be given to those who undertake appraisals at successive levels. The appraiser should provide a short list of topics to the person to be interviewed, who should be encouraged to add to the list, so that each understands the items to be covered. Topics to be considered should include: quality of performance and accurate completion of assignments; productivity and dependability; ability to work in a team; and ability to relate to patients, clinicians and co-workers. It is not appropriate to include considerations relating to pay. An appraisal interview should be a constructive dialogue of the present state of development and the progress made to date; it should be open-ended and should identify future training requirements. Ideally, the staff members should leave the interviews with the knowledge that their personal development and future progress are of importance to the department, that priorities have been identified, that an action plan with milestones and a time scale has been agreed and that progress will be monitored. Formal appraisal interviews (annually for senior staff and sometimes more often for others) should be complemented by less formal follow-up discussions to monitor progress and to check that suboptimal performance has been modified. Performance appraisal can have lasting value in the personal development of individuals, but the process can easily be mishandled and should not be started without training in how to hold an appraisal interview.

Continuing professional development

Continuing professional development is a process of continuous, systematic learning that keeps health workers up to date with developments in their professional work and thus ensures their competence to practise throughout their careers. Policies and programmes have been established in a number of countries and, in some, participation is a mandatory requirement for the right to practise.

In the UK, haematologists and clinical scientists who have the relevant qualifications awarded by the Royal College of Pathologists (RCPath) are licensed to practise by the General Medical Council. They are required to participate in a scheme organised by the College that involves maintenance of a portfolio showing their participation in relevant educational and academic activities and demonstration of their professional skills.

The Institute of Biomedical Science runs a similar scheme for scientists/technologists working in the laboratory, which is mandatory for registration to practise with the UK Health Professions Council. The procedure is based on obtaining ‘credits’ for various activities that qualify, such as attendance at specified lectures, workshops and conferences; giving lectures; writing books or journal articles; using journal-based programmes and taking part in peer review discussions.

Strategic and business planning

The head of department is responsible for determining the long-term (usually up to 5 years) strategic direction of the department. Strategic planning requires awareness of any national and local legislation that may affect the laboratory and of changes in local clinical practice that may alter workload. Expansion of a major clinical service, such as organ transplantation, or the opportunity to compete for the laboratory service of other hospitals and clinics, may pose an external opportunity, but may also be a threat to the laboratory, depending on its ability to respond to the consequential increase in workload. Technical or scientific expertise would be a strength, whereas a heavy workload without adequate staffing, or a lack of automation for routine tests, is likely to preclude any additional developmental work and would, thus, be a weakness.

Increasingly, laboratories must meet financial challenges and the need for greater cost-effectiveness. This may require rationalisation by eliminating unused laboratory capacity, avoiding unnecessary tests and ensuring more efficient use of skilled staff and expensive equipment. Rationalisation may mean centralisation of multiple laboratory sites or, conversely, the establishment of satellite centres for the benefit of patients and clinicians when this can be shown to be cost-effective. Account must also be taken of the role of the laboratory in supervision of the extra-laboratory point-of-care procedures that have become increasingly popular.

A business plan is primarily concerned with determining short-term objectives that will allow the strategy to be implemented over the next financial year or so. It requires prediction of future workload and expansion. Planning of these objectives should involve all staff, because this will heighten awareness of the issues and give staff a personal stake in the strategy. In all but the smallest laboratory, a business manager is required to coordinate such planning and to liaise with the equivalent business managers in other clinical and laboratory areas.

The largest and most important strategic review of pathology services ever undertaken in the UK concluded that the principal mechanism for performance improvement in laboratories was reconfiguration of individual laboratories into managed pathology networks. This allows for improvements in both the efficiency and the quality of the laboratory service provided. The cost per test frequently falls as a service carries out more tests, as laboratories develop staff and process expertise (especially in less frequently needed tests) and as throughput increases. A pathology network may be arranged in many ways but often takes the form of a ‘hub-and-spoke’ arrangement, in which a local, rapid response ‘spoke’ laboratory provides those tests that are required to support acute services (e.g. full blood count, basic electrolytes, clotting profile, blood transfusion) while the ‘hub’ or core laboratory provides most of the more specialised pathology service. The hub may be located away from the site of the clinical service, provided that communication links are maintained between the laboratory and clinicians.

Workload assessment and costing of tests

Laboratories should maintain accurate records of workload, overall costs and the cost per test in order to apportion resources to each section. Computerisation of laboratories has greatly facilitated this process. In assessing workload, account must be taken of the entire cycle from specimen receipt to issuing of a report, whether the test is by a manual or semiautomated method or by a high-volume multiple-analyte automated analyser. Apportioning of resources should also take account of the roles of biomedical scientists/senior technologists, junior technicians, laboratory assistants, clerical staff and medical personnel responsible for reviewing the report. An out-of-hours service requires a different calculation of costs.

Methods have been developed for determining the workload and costs for various laboratory tests, taking account of test complexity, total number of tests performed, quality control procedures, cost of reagents and use of material standards, so that laboratories can compare their operational productivity with a peer group of participating laboratories. A good example is the Standards for Management Information Systems in Canadian Health Service Organisations (www.cihi.ca/en/data-and-standards/standards/mis-standards/standards-for-management-information-systems-in-canadian).

A similar workload recording method was published by the College of American Pathologists, and the Welcan system was established in the UK. However, more recently, benchmarking schemes have been established that take account of productivity, cost-effectiveness and utilisation compared with a peer group. The College of American Pathologists created their Laboratory Management Index Program in which participants submit their laboratories’ operating data on a quarterly basis and receive peer comparison reports from similar laboratories around the country by which their own cost-effectiveness can be evaluated.

Financial control

Full costing of tests includes all aspects of laboratory function ( Table 24-2 ).

Table 24-2
Factors contributing to cost of laboratory tests
Direct costs:
  Staff salaries
  Laboratory equipment purchases
  Reagents and other consumables
  Equipment maintenance
  Standardisation and quality control
  Specific technical training on equipment
Indirect costs:
  Capital costs and mortgage factor
  Depreciation
  Building repairs and routine maintenance
  Lighting, heating and waste disposal
  Personnel services
  Cleaning services
  Transport, messengers and porters
  Laundry services
  Computers and information technology
  Telephone and fax
  Postage
  Journals and textbooks

The amount allocated for staff salaries should include the cost of training and should take into account absences for annual leave, study leave or sickness. It needs also to take into account the extent to which staff of various levels, as described earlier, are involved. Indirect costs may be apportioned to different sections of a department that share common overhead costs.

Calculation of test costs

When preparing a budget, the following formula provides a reasonably reliable estimate of the total annual costs:


Total annual cost = (L × N) + (C × N) + E + M + O + S + T + A

where

  • L = Labour costs for each test from estimate of time taken and the salary rate of the staff member(s) performing the tests

  • N = Number of tests in the year

  • C = Cost of consumables per test (including controls)

  • E = Annual equipment cost based on initial cost divided by expected life of the item or the annual cost of hire (see below)

  • M = Annual maintenance and servicing of equipment

  • O = Laboratory overheads ( Table 24-2 )

  • S = Supervision

  • T = Transport and communication

  • A = Laboratory administration, including salaries of clerical and other nontechnical staff.

Efficient budgeting requires regular monitoring, at least monthly. Computer spreadsheets provide an easily comprehended view of the financial state and the likely responses in the running of the laboratory.
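For illustration, the same calculation can be set out in a few lines of code rather than a spreadsheet. The following Python sketch uses invented placeholder figures; none of the salaries, volumes or overheads below comes from a real budget:

```python
# Sketch of the annual cost formula above; every figure is an invented placeholder.

def annual_test_cost(L, N, C, E, M, O, S, T, A):
    """Total annual cost = (L x N) + (C x N) + E + M + O + S + T + A."""
    return (L * N) + (C * N) + E + M + O + S + T + A

N = 50_000  # tests per year
total = annual_test_cost(
    L=1.20,     # labour cost per test
    N=N,
    C=0.80,     # consumables per test, including controls
    E=8_000,    # annualised equipment cost (purchase price / expected life)
    M=2_500,    # maintenance and servicing
    O=10_000,   # laboratory overheads (Table 24-2)
    S=4_000,    # supervision
    T=1_500,    # transport and communication
    A=6_000,    # administration, including clerical salaries
)
print(f"Total annual cost: {total:,.2f}")
print(f"Cost per test:     {total / N:.2f}")
```

Dividing the total by N gives the fully absorbed cost per test that can be used when reviewing price setting.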

In general, staff cost is by far the largest component of the total costs of running a laboratory. Furthermore, many of the other costs are obligations outside the direct control of the laboratory. If financial savings become necessary, they can be achieved in a variety of ways, but large savings usually necessitate a reduction in staff because employment costs can account for three-quarters of total expenditure. Possible initiatives include the following:

  • Rationalisation of service with other local hospitals to eliminate duplication

  • Restructuring within a hospital laboratory for cross-discipline working (usually between haematology and clinical chemistry)

  • Subcontracting of labour-intensive tests to a specialist laboratory

  • Greater use of automated instruments/methods

  • Employment of part-time contract staff (e.g. for overnight and weekend emergency service or for the phlebotomy service) and sharing of emergency service between local hospitals

  • Review of price setting on the basis of workload and calculated cost per test.

Increasingly, use of automated systems for routine screening tests allows the laboratory to consider staff reduction, although an estimate of savings must take account of capital costs, maintenance contracts and running costs of the equipment, especially the high cost of some reagents, and whether the system can be used to high capacity and throughout a 24 h service.

Purchasing expensive equipment outright adds to the capital assets of the laboratory, with the consequential cost of depreciation (usually 8–10% per annum). Leasing equipment can be a better alternative, and in many countries most equipment is obtained in this way. Careful calculation of the lease cost is required because this can be up to 20% higher than outright purchase. An advantage of leasing is flexibility to upgrade equipment should workload increase or technology change. If maintenance and consumable costs are included in the same agreement, it may be possible to negotiate a reduction in charge for the consumables, but it is important to neither underestimate nor overestimate the annual requirements that will be included in the contract.

When automation is coupled with centralisation of the service to another site, care must be taken to maintain service quality. Failure to do so will encourage clinicians to establish independent satellite laboratories. Loss of contact between clinical users and laboratory staff may compromise the pre-analytical phase of the test process and may lead to inappropriate requests, excessive requests and test samples that are of inadequate volume or are poorly identified. When services are centralised, attention must be paid to all phases (pre-analytical, analytical and postanalytical) of the test process, including the need for packaging the specimens and the cost of their transport to the laboratory.

Test reliability

The reliability of a quantitative test is defined in terms of the uncertainty of measurement of the analyte (sometimes referred to as ‘measurand’). This is based on its accuracy and precision.

Accuracy is the closeness of agreement between the measurement that is obtained and the true value; the extent of discrepancy is the systematic error or bias. The most important causes of systematic error are listed in Table 24-3 . The error can be eliminated or at least greatly reduced by using a reference standard with the test, together with internal quality control and regular checking by external quality assessment (see Chapter 25 , p. 539).

Table 24-3
Systematic errors in analyses
Analyser calibration uncertain (no reference standard available)
Bias in instrument, equipment or glassware
Faulty dilution
Faults in the measuring steps (e.g. reagents, spectrometry, calculations)
Sampling not representative of specimen
Specimens not representative of in vivo status
Incomplete definition of analyte or lack of critical resolution of analyser
Approximations and arbitrary assumptions inherent in analyser’s function
Environmental effects on analyser
Pre-analytical deterioration of specimens

Precision is the closeness of agreement when a test is repeated a number of times. Imprecision is the result of random errors; it is expressed as standard deviation (SD) and coefficient of variation (CV). When the data are normally distributed (Gaussian), for clinical purposes there is a 95% probability that results falling within + 2SD to − 2SD of the target value are correct and a 99% probability if they fall within + 3SD to − 3SD (see also Fig. 2-1 ).

Some of the other factors listed in Table 24-3 can be quantified to calculate the combined uncertainty of measurement. Thus, for example, when a calibration preparation is used, its uncertainty is usually stated on the label or accompanying certificate. The combined standard uncertainty is then calculated as the square root of the sum of the squared individual uncertainties:


√(SD₁² + SD₂²)

Expanded uncertainty of measurement takes account of nonquantifiable items by multiplying the combined standard uncertainty by a ‘coverage factor’ (k), which is usually taken to be 2 for a 95% level of confidence.
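As a minimal sketch of the calculation just described, the combined and expanded uncertainties can be computed as follows; the two uncertainty components and their values are hypothetical:

```python
import math

def combined_standard_uncertainty(sds):
    """Square root of the sum of squared standard uncertainties (SD1, SD2, ...)."""
    return math.sqrt(sum(sd ** 2 for sd in sds))

def expanded_uncertainty(sds, k=2):
    """Expanded uncertainty using a coverage factor k (k = 2 for ~95% confidence)."""
    return k * combined_standard_uncertainty(sds)

# Hypothetical example: the calibrator uncertainty (from its certificate)
# combined with the analyser's own imprecision, both in g/L.
u_calibrator = 0.8
u_analyser = 1.1

print(combined_standard_uncertainty([u_calibrator, u_analyser]))  # ~1.36
print(expanded_uncertainty([u_calibrator, u_analyser]))           # ~2.72
```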

It may be necessary to decide by statistical analysis whether two sets of data differ significantly. The t-test is used to assess the likelihood of a significant difference at various levels of probability by comparing the means or individually paired results. The F-ratio is useful for assessing the influence of random errors in two sets of test results (see Appendix, p. 566).

Of particular importance are reports with ‘critical laboratory values’ that may be indicative of life-threatening conditions requiring rapid clinical intervention. Haemoglobin concentration, platelet count and activated partial thromboplastin time have been included in this category. The development of critical values should involve consultation with clinical services.

Test selection

It is important for the laboratory to be aware of the limits of accuracy that it achieves in routine performance, both within each day and from day to day. Clinicians should be made aware of the level of uncertainty of results for any test and the potential effect of this on their diagnosis and interpretation of response to treatment (see below).

To evaluate the diagnostic reliability and predictive value of an individual laboratory test, it is necessary to calculate test sensitivity and specificity. Sensitivity is the fraction of true positive results when a test is applied to patients known to have the relevant disease or when results have been obtained by a reference method. Specificity is the fraction of true negative results when the test is applied to healthy (normal) subjects.


Diagnostic sensitivity = TP ÷ (TP + FN)
Diagnostic specificity = TN ÷ (TN + FP)
Positive predictive value = TP ÷ (TP + FP)
Negative predictive value = TN ÷ (TN + FN)

where TP = true positive; TN = true negative; FP = false positive; FN = false negative.

Overall reliability can be calculated as:


[(TP + TN) ÷ Total number of tests] × 100%

Sensitivity and specificity should be near 1.0 (100%) if the test is regarded as diagnostic for a particular condition. A lower level of sensitivity or specificity may still be acceptable if the results are interpreted in conjunction with other tests as part of an overall pattern. It is not usually possible to have both 100% sensitivity and 100% specificity. Whether sensitivity or specificity is more important depends on the particular purpose of the test. Thus, for example, if haemoglobinometry is required in a clinic for identifying patients with anaemia, sensitivity is important, whereas in blood donor selection, for selecting individuals who are not anaemic, specificity is more important.
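The indices defined above can be calculated directly from a 2 × 2 table of test results. The following Python sketch uses invented counts purely for illustration:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Diagnostic performance indices from a 2 x 2 table of test results."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
        "overall_reliability_%": (tp + tn) / total * 100,
    }

# Hypothetical screening counts, not real data:
print(diagnostic_metrics(tp=90, fp=15, tn=185, fn=10))
# sensitivity 0.90, specificity 0.925, PPV ~0.86, NPV ~0.95, reliability ~91.7%
```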

Likelihood ratio

The ratio of positive results in disease to the frequency of false-positive results in healthy individuals gives a statistical measure of the discrimination by the test between disease and normality. It can be calculated as follows:


Sensitivity ÷ (1 − Specificity)

The higher the ratio, the greater is the probability of disease, whereas a ratio < 1 makes the possibility of the disease being correctly diagnosed by the test much less likely. Conversely, the likelihood of normality can be calculated as:


(1 − Specificity) ÷ Sensitivity

An alternative measure is the Youden index, obtained by calculating (Sensitivity + Specificity) − 1 (see p. 542). Values range between − 1 and + 1. As the index rises above zero towards + 1, there is an increasing probability that the test will discriminate the presence of the specified disease; as it falls from 0 towards − 1, the test becomes increasingly unlikely to be valid.
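A short sketch of these calculations, using the formulas as set out above and the invented sensitivity and specificity figures from the previous example:

```python
def disease_likelihood_ratio(sensitivity, specificity):
    """Likelihood of disease: sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def normality_likelihood_ratio(sensitivity, specificity):
    """Converse ratio given above for the likelihood of normality."""
    return (1 - specificity) / sensitivity

def youden_index(sensitivity, specificity):
    """Youden index = sensitivity + specificity - 1; ranges from -1 to +1."""
    return sensitivity + specificity - 1

# Hypothetical figures carried over from the earlier 2 x 2 example:
print(disease_likelihood_ratio(0.90, 0.925))    # 12.0
print(normality_likelihood_ratio(0.90, 0.925))  # ~0.083
print(youden_index(0.90, 0.925))                # 0.825
```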

Receiver–operator characteristic analysis

The relative usefulness of different methods for the same test, or of a new method against a reference method, can also be assessed by analysing the receiver–operator characteristics (ROC). This is demonstrated on a graph by plotting the true-positive rate ( sensitivity ) on the vertical axis against the false-positive rate (1 − specificity ) on the horizontal axis for a series of paired measurements ( Fig. 24-1 ). Obviously, the ideal test would show high sensitivity (i.e. 100% on the vertical axis) with no false positives (i.e. 0% on the horizontal axis). Realistically, there is a compromise between the two criteria, with test selection depending on its purpose (i.e. whether as a screening test to exclude the disease in question or to confirm a clinical suspicion that the disease is present). In the illustrated case, Test A is more reliable than Test B in both circumstances.

Figure 24-1, Receiver–operator characteristic (ROC) analysis.
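A ROC curve of the kind shown in Fig. 24-1 can be generated by recalculating sensitivity and 1 − specificity at a series of decision thresholds. The sketch below uses a small set of invented measurements and disease labels purely to show the mechanics:

```python
# Sketch of how ROC points might be generated for a quantitative test.
# The measurement values, disease labels and thresholds are invented.

def roc_points(values, diseased, thresholds):
    """Return (1 - specificity, sensitivity) pairs, one per decision threshold."""
    points = []
    for t in thresholds:
        tp = sum(1 for v, d in zip(values, diseased) if d and v >= t)
        fn = sum(1 for v, d in zip(values, diseased) if d and v < t)
        fp = sum(1 for v, d in zip(values, diseased) if not d and v >= t)
        tn = sum(1 for v, d in zip(values, diseased) if not d and v < t)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        points.append((1 - specificity, sensitivity))
    return points

values   = [5, 7, 8, 9, 10, 11, 12, 13, 15, 18]
diseased = [False, False, False, False, False, True, True, True, True, True]
print(roc_points(values, diseased, thresholds=[6, 9, 11, 14]))
```

Plotting these pairs for two candidate methods on the same axes reproduces the comparison of Test A and Test B in the figure.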

Test utility

To ensure reliability of the laboratory service, tests with no proven value should be eliminated and new tests should be introduced only when there is evidence of technical reliability as well as cost-effectiveness.

For assessing cost-effectiveness of a particular test, account must be taken of (1) cost per test as compared with other tests that provide similar clinical information; (2) diagnostic reliability; and (3) clinical usefulness, as assessed by the extent to which the test is relied on in clinical decisions and whether the results are likely to change the physician’s diagnostic opinion and the clinical management of the patient, taking account of disease prevalence and a specified clinical or public health situation. This requires audit by an independent assessor to judge what proportion of the requests for a particular test are actually used intelligently and what percentage are unnecessary or wasted tests. Information on the utility of various tests can also be obtained from benchmarking (see p. 526) and published guidelines. Examples of the latter are the documents published by the British Committee for Standards in Haematology ( http://www.bcshguidelines.com/ ). The realistic cost-effectiveness of any test may be assessed by the following formula (illustrated in the sketch after this list):

  • A/(B × C), where A = cost/test, as described on p. 515

  • B = diagnostic reliability, as described on p. 514

  • C = clinical usefulness, as described above.
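A minimal sketch of the A/(B × C) calculation follows; B and C are expressed here as fractions between 0 and 1, and all figures are invented for illustration:

```python
def cost_effectiveness_score(cost_per_test, diagnostic_reliability, clinical_usefulness):
    """A / (B x C): lower scores indicate better value for money.
    Reliability (B) and usefulness (C) are expressed as fractions between 0 and 1."""
    return cost_per_test / (diagnostic_reliability * clinical_usefulness)

# Hypothetical comparison of two tests providing similar clinical information:
print(cost_effectiveness_score(2.50, 0.92, 0.80))  # ~3.40
print(cost_effectiveness_score(1.80, 0.85, 0.55))  # ~3.85, i.e. poorer value despite the lower price
```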

Economic aspects should also be considered when providing an automated total screening programme for every patient, in contrast to specifically selected tests. Many clinicians may not be familiar with all of the 12 or more parameters included in the blood count as reported routinely by modern automated analysers, and in most cases some of these measurements are unlikely to be clinically useful; nevertheless, the ‘not requested’ information may be provided at no extra cost and may even save time in the laboratory. In addition, this ‘not requested’ information will be meaningful to some users, including haematologists who might be consulted about a patient.

Instrumentation

Equipment evaluation

Assessment of the clinical utility and cost-effectiveness of equipment to match the nature and volume of laboratory workload is a very important exercise. Guidelines for evaluation of blood cell analysers and other haematology instruments have been published by the International Council for Standardisation in Haematology. In the UK, appraisal of various items of laboratory equipment was formerly undertaken by selected laboratories at the request of the Department of Health’s Medical Devices Agency, subsequently renamed Medicines and Healthcare products Regulatory Agency (MHRA) and now replaced by the Centre for Evidence-based Purchasing (CEP). (Their reports can be accessed from the website http://nhscep.useconnect.co.uk ).

Principles of evaluation

The following aspects are usually included in evaluations:

  • 1. Verification of instrument requirements for space and services
  • 2. Extent of technical training required to operate the instrument
  • 3. Clarity and usefulness of the instruction manual
  • 4. Assessment of safety (mechanical, electrical, microbiological and chemical)
  • 5. Determination of the following:
    • a. Linearity
    • b. Precision/imprecision
    • c. Carryover
    • d. Extent of inaccuracy by comparison with measurement by definitive or reference methods
    • e. Comparability with an established method used in the laboratory
    • f. Performance when used in an external quality control scheme
    • g. Sensitivity (i.e. determination of the smallest change in analyte concentration that gives a measured result)
    • h. Specificity (i.e. extent of errors caused by interfering substances)
  • 6. Throughput time and number of specimens that can be processed within a normal working day
  • 7. Reliability of the instrument when in routine use and adequacy of service and maintenance provided
  • 8. Cost per test, including operating time, reagents, daily start-up procedure and regular (usually weekly) maintenance procedures
  • 9. Staff acceptability, impact on laboratory organisation and level of technical expertise required to operate the instrument
  • 10. Any relevant authority label, e.g. a CE mark on a device indicates that it conforms to defined specifications of the EU directive 98/79 for in vitro diagnostic medical devices (IVDD), as described in the Official Journal (OJ) of the EC: 7.12.98 ( http://ec.europa.eu/growth/single-market/european-standards/harmonised-standards/medical-devices/index_en.htm ).

After an instrument has been purchased and installed, it is useful to undertake regular less extensive checks of performance with regard to precision, linearity, carryover and comparability.

Precision

Carry out appropriate measurements 10 times consecutively on three or more specimens selected to extend into the pathological range so as to include a low, a high and a middle range concentration of the analyte. Calculate the replicate SD and CV as shown on p. 566. The degree of precision that is acceptable depends on the purpose of the test ( Table 24-4 ). To check between-batch precision, measure three samples in several successive batches of routine tests; calculate the SD and CV in the same way.

Table 24-4
Test precision for different purposes
Purpose of test              Expected CV% (Automated Counters)
                             Hb        RBC       WBC
Scientific standard          < 1       1         1–2
State of the art:
  Best performance           1.5       2         3
  Routine laboratories       2–3       3         5–6
Clinical needs               5–10      5         10–15
CV, coefficient of variation; Hb, haemoglobin concentration; RBC, red blood cell count; WBC, white blood cell count.
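The replicate SD and CV% for each specimen can be calculated as in the following sketch; the haemoglobin results are invented and serve only to illustrate the arithmetic:

```python
import statistics

def precision_summary(replicates):
    """Replicate mean, SD and CV% for repeated measurements on one specimen."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample SD (n - 1 denominator)
    cv = sd / mean * 100
    return mean, sd, cv

# Hypothetical Hb results (g/L), 10 consecutive measurements on one specimen:
hb_replicates = [118, 119, 117, 118, 120, 118, 117, 119, 118, 118]
mean, sd, cv = precision_summary(hb_replicates)
print(f"mean {mean:.1f}  SD {sd:.2f}  CV {cv:.2f}%")  # CV ~0.8%, within Table 24-4 limits
```

The same function can be applied to the low-, middle- and high-concentration specimens, and to samples re-measured in successive batches for between-batch precision.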

Linearity

Linearity demonstrates the effects of dilution. Prepare a specimen with a high concentration of the analyte to be tested and, as accurately as possible, make a series of dilutions in plasma so as to obtain 10 samples with evenly spaced concentration levels between 10% and 100%. Measure each sample three times and calculate the means. Plot results on arithmetic graph paper. Ideally, all points should fall on a straight line that passes through the zero of the horizontal and vertical axes. In practice, the results should lie within 2SD limits of the means calculated from the CVs, which have been obtained from analysis of precision (see earlier). Inspection of the graph will show whether there is linearity throughout the range or whether it is limited to part of the range.
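A linearity check of this kind can be tabulated as in the sketch below. The dilution results are invented, and the fixed 5% flag is only a placeholder; in practice the acceptance limits should be the 2SD limits derived from the precision study described above:

```python
# Sketch of a linearity check on a dilution series (all figures invented).
# Each dilution is measured three times; the means are compared with the
# values expected from the undiluted (100%) specimen.

expected_fraction = [0.1 * i for i in range(1, 11)]  # 10% ... 100%
top_value = 160                                      # e.g. Hb 160 g/L in the undiluted specimen
measured_means = [16.2, 31.5, 48.4, 64.0, 80.9, 95.2, 112.5, 127.6, 144.8, 159.0]

for frac, observed in zip(expected_fraction, measured_means):
    expected = top_value * frac
    deviation_pct = (observed - expected) / expected * 100
    flag = "" if abs(deviation_pct) <= 5 else "  <-- outside placeholder 5% band (check 2SD limit)"
    print(f"{frac:>4.0%}: expected {expected:6.1f}, observed {observed:6.1f}, "
          f"deviation {deviation_pct:+5.1f}%{flag}")
```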

Carryover

Carryover indicates the extent to which measurement of an analyte in a specimen is likely to be affected by the preceding specimen. Measure a specimen with a high concentration in triplicate, immediately followed by a specimen with a low concentration of the analyte:


Carryover % = [(l1 − l3) ÷ (h3 − l3)] × 100

where l1 and l3 are the results of the first and third measurements of the sample with a low concentration and h3 is the third measurement of the sample with a high concentration.
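A minimal sketch of the carryover calculation, using the notation above and invented white cell counts:

```python
def carryover_percent(l1, l3, h3):
    """Carryover % = (l1 - l3) / (h3 - l3) x 100, using the notation above."""
    return (l1 - l3) / (h3 - l3) * 100

# Hypothetical WBC results (x10^9/L): third reading of the high specimen,
# then first and third readings of the low specimen.
print(carryover_percent(l1=3.4, l3=3.2, h3=85.0))  # ~0.24%
```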

Accuracy and comparability

Accuracy and comparability test whether the new instrument (or method) gives results that agree satisfactorily with those obtained with an established procedure and with a reference method. Test specimens should be measured alternately, or in batches, by the two procedures. If results by the two methods are analysed by the correlation coefficient (r), a high correlation does not mean that the two methods agree: the correlation coefficient is a measure of association, not agreement. It is better to use the limits of agreement method. For this, plot the differences between paired results on the vertical axis of linear graph paper against the means of the pairs on the horizontal axis ( Fig. 24-2 ); differences between the methods are then readily apparent over the range from low to high values. If the scatter of differences increases at high values, logarithmically transformed data should be plotted.

Figure 24-2, Limits of agreement method.
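The limits of agreement calculation can be sketched as follows; the paired platelet counts are invented, and the conventional mean ± 2SD of the differences is used for the limits:

```python
import statistics

def limits_of_agreement(results_a, results_b):
    """Mean difference (bias) and limits of agreement (bias +/- 2SD) for paired results."""
    diffs = [a - b for a, b in zip(results_a, results_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 2 * sd, bias + 2 * sd)

# Hypothetical paired platelet counts (x10^9/L) by the new and the established method:
new_method  = [182, 240, 310, 455, 98, 523, 151, 365]
established = [178, 252, 301, 470, 95, 540, 148, 372]

bias, (lower, upper) = limits_of_agreement(new_method, established)
print(f"bias {bias:+.1f}, limits of agreement {lower:+.1f} to {upper:+.1f}")

# For the plot in Fig. 24-2, plot each difference (vertical axis) against the
# mean of the corresponding pair (horizontal axis).
```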

It is also useful to check for bias by including the instrument or method under test in the laboratory’s participation in an external quality assessment scheme (see p. 539). Bias is expressed by:


Bias % = [(R − M) ÷ M] × 100

where R = measurement by the device/method being tested and M = target result.

Another method to check for bias is by means of the variance index. For this, the coefficient of variation is established at an optimal chosen value (CCV) to ensure a reliable method and the variation index (VI%) is calculated as:


VI % = {[(R − M) ÷ M × 100] ÷ CCV} × 100
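Both the bias and the variance index calculations can be sketched in a few lines; the target value, laboratory result and chosen CV (CCV) below are invented for illustration:

```python
def bias_percent(result, target):
    """Bias % = (R - M) / M x 100."""
    return (result - target) / target * 100

def variance_index_percent(result, target, ccv):
    """VI % = (bias % / chosen CV) x 100."""
    return bias_percent(result, target) / ccv * 100

# Hypothetical external quality assessment return: Hb target 125 g/L,
# laboratory result 128 g/L, chosen CV (CCV) of 3%.
print(bias_percent(128, 125))               # +2.4%
print(variance_index_percent(128, 125, 3))  # 80
```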
