Review Article

Industrial Health 2007, 45, 730–742

Historical Development of the Journal Impact Factor and its Relevance for Occupational Health

Derek R. SMITH1

1International Centre for Research Promotion and Informatics, National Institute of Occupational Safety and Health, 6–21–1 Nagao, Tama-Ku, Kawasaki 214-8585, Japan

Received April 22, 2007 and accepted August 8, 2007

Abstract: For better or for worse, the advent of journal impact factors last century marked a key turning point in the global development of scientific publication and referencing systems. Since that time, however, the concept has attracted considerable attention from a variety of sources, and its usefulness for relatively small research fields such as occupational health has also been debated. For these reasons, the current paper provides a descriptive history of the journal impact factor and a discussion of its relevance for occupational health. Developmental milestones, inherent shortcomings and future challenges are also described, along with techniques used for increasing the impact factor and some potential strategies for improvement of the citation indexing system. While many scholars now question its increasingly prominent role in the evaluation of scientific research, the journal impact factor continues to form an important component in the dissemination and retrieval of scientific literature in the occupational health field, as elsewhere. Due to the controversy incurred since its inception, however, and the increasingly diverse manner in which it is now being used, it remains to be seen what the next 50 years of journal impact factors will bring.

Key words: Impact Factor, Citation Index, Occupational Health, Journal Publishing, Citation Classics

Introduction

It has been suggested that the ideal goal of science is to record and share all useful research findings with others1). From this, the primary mission of a scientific journal can be seen as one of communication, via the documentation of scientific advances, as well as communicating the ethos of a particular field2). In specialty areas such as occupational health, professional journals also exist to serve the needs of the practicing occupational physician3). This may be in the guise of words on paper and a record of changing concerns, as well as a vehicle for the aspirations of society members4). Topics published in specialty journals tend to reflect the major research areas and clinical challenges of that particular domain5), and like all academic disciplines, the field of occupational health is constantly changing. Epidemiology for example, has risen to become an increasingly important component of work-related research in recent years6). To keep abreast of these changes, journal editors should consistently strive to publish valid, new and important papers7).
Regardless of the topic published in a journal, the problem with medical literature is that one must always keep up to date with contemporary developments in the field. While it is no doubt essential for the scientist and clinician to follow the latest research findings, one can never hope to catch up by simply reading more articles8). In the occupational health field, as elsewhere, researchers, clinicians and librarians have always been faced with the dilemma of what periodicals are the most appropriate for their needs9), given that there are literally millions of articles on library shelves. To help researchers, librarians and other academics choose appropriate material for critical reading, the necessity of a simple measure for comparing and evaluating journals became increasingly apparent during the 20th century. For many scholars, it had also become clear over time that a relatively small group of core scientific journals were responsible for publishing the majority of important articles10). In order to facilitate the dissemination and retrieval of scientific literature therefore, the idea of ranking journals with an "impact factor" emerged last century11), and one which has since become the subject of widespread controversy12). Publication within prestigious journals for example, is now being regularly used by funding agencies as a marker for assessing the performance of researchers and their departments13). As a result, scientists are becoming increasingly desperate to publish their findings in a few select periodicals14). Although impact factors now represent a major consideration for the scientific community, the basic concept did not catch on immediately. Indeed, it has been noted that as recently as fifteen years ago, many editors and academics had still not even heard of it15). In the competitive environment of modern scientific research, academia and publishing however, virtually nothing is so eagerly anticipated as the release of each year's journal impact factors by the Institute for Scientific Information (ISI) in Philadelphia16), now known as Thomson Scientific. Due to its marketing value and attractiveness for potential authors and advertisers, most journals advertise a current impact factor on their website, with some also including a historical record of trends from the past few years. Despite this fact, the mystery and complexity behind each year's impact factor calculations are frequently misunderstood by many scholars, while its usefulness for small research fields such as occupational health has also been debated5, 17, 18). For these reasons, the current paper provides an overview of journal impact factors for the occupational health professional, including pivotal moments in its historical development, scientific milestones in global research and journal publishing, as well as a summary and discussion of its inherent strengths and limitations. Techniques used for increasing the impact factor are proposed, along with some potential strategies for improvement of the citation indexing system. Its overall relevance and impact for the occupational health field is also described and debated, expanding a previous editorial on the topic3).

Historical Background

Although the concept of an impact factor was first proposed in 195519), its origins can be traced much further back. In 1927 for example, Gross & Gross20) of Pomona College in the United States (US) first suggested counting references as a means to rank the use of scientific journals21). In 1934, S.C. Bradford22), head of the Science Museum Library in London, described how scientific articles on a given topic were being unevenly distributed across the journal literature. Interestingly, since 1873, the US legal profession had benefited from a research tool conceptually similar to the impact factor, known as Shepard's Citations23).
Named after its founder, the Frank Shepard Company of Colorado, Shepard's Citations was a list of American court cases and judgments, with the complete history of each being recorded in a simple code. Under every listing was a record of other court cases that had since referred to it, subsequent judgments that had affected it, as well as anything else of potential value to the lawyer24). As most US legal decisions are based on precedent, such a system had evolved because there was always a need to cite earlier cases, while simultaneously ensuring that a previous judgment had not since been reversed, overruled or otherwise distinguished in some way that made it invalid23). Aside from including many law reviews and law journals, Shepard's Citations also listed some specialty publications such as the Journal of the Patent Office Society23), which would later spark the interest of a young chemistry student named Eugene Garfield. Garfield was raised during the Great Depression, the son of a successful newspaper magazine distributor who ran a firm known as the Garfield News Company25). After completing high school and commencing a chemistry degree, Garfield served with the US ski troops during World War Two, before completing his studies at Columbia University and obtaining a job at the same institute in the post-war years26). Reportedly, one of Garfield's first inspirations came after reading a 1945 article by Vannevar Bush27) where the idea of making previously collected information more accessible and recording people's information trails was first proposed28). While attending a meeting of the American Chemical Society in 1951, Garfield heard of a job opening at the Welch Medical Indexing Project at Johns Hopkins University, which he subsequently obtained in the same year29). The Welch project itself was a venture funded by the US Army Medical Library to examine systems for medical information retrieval and new methods of indexing the biomedical literature, which would later evolve into the Index Medicus. While working for the project, Garfield became particularly interested in using machines to help generate indexing terms that would describe a document's contents without the need for human intervention30). Interest in the concept slowly grew, and in 1953, he obtained some major press coverage after organizing the first symposium on machine methods in scientific documentation at Johns Hopkins University26). Upon reading newspaper reports about the Welch project, a retired Vice President from Shepard's Citations named William Adair contacted Garfield to see if their indexing system might be applied to the sciences. In what would later be described as a "eureka moment", Garfield visited a public library in Baltimore to see Shepard's Citations for himself, confirming that it was indeed well-suited to such a role31).
In order to bestow appropriate credit for making the link however, Garfield encouraged Adair to write an article about Shepard's Citations, which was subsequently published in 195523). Realizing the potential of combining these two ideas, Garfield quit his job with the indexing project and undertook a Master's degree in library sciences at Columbia University to further develop professional skills in the area. One year after graduation, Garfield published his landmark 1955 article in Science19), where it was first proposed that counting references could help measure what he termed the "impact" of a particular journal. Sensing potential with the idea, Garfield borrowed $500 from a loan company in 1958 and founded the Institute for Scientific Information (ISI) in a converted chicken shed on the outskirts of Woodbury, New Jersey32). In 1961, Garfield and his colleague Irving Sher received a grant from the US National Institutes of Health to produce an experimental Genetics Citation Index, which would later lead to the Science Citation Index (SCI®). The term "impact factor" was first used in 1963, when the inaugural (1961) SCI was published by Garfield's newly formed ISI company21), although it would take some years before the SCI actually made a profit29). In 1965, Price33) published his classic article on the network properties of scientific papers, and by 1967, Garfield had noted that as the field of science grew, its commitment to the handling of scientific information must also increase34). Journal Citation Reports (JCR®) were subsequently launched as a byproduct of the SCI, and between 1975 and 1989, appeared as a supplementary volume in the annual SCI21). Garfield's invention started with a listing of 200 journals in roughly 32 pages per issue in 195835), growing to 600 journals in 1964 and 2,400 journals by 197236). By 1972, approximately 1 million scientists were accessing the ISI database worldwide, and by the SCI's 40th anniversary in 1998, over 8,000 titles were being listed across 35 languages35). In 2005 it was estimated that the SCI database contained 550 million citations37), and in 2006, the JCR was including around 15 million citations from approximately one million source items per year38). Key historical milestones in the development of the journal impact factor are displayed in Table 1.

Table 1. Important Dates in the Historical Development of the Impact Factor

2005  Approximately 550 million citations are contained in the SCI database37)
1999  The concept of "topic-based" impact factors for occupational health is proposed17)
1998  40th anniversary of the ISI, which is now covering over 8,000 titles35)
1997  The first journal is accused of manipulating its impact factor80)
1972  It is estimated that one million scientists are accessing the ISI database
1965  Price33) publishes his article about the network properties of scientific papers
1964  The ISI now covers at least 600 journals in its databases
1963  The term "impact factor" is first used in the inaugural Science Citation Index (SCI®)
1961  The SCI's precursor, the Genetics Citation Index, is founded
1958  Garfield borrows $500 and founds the Institute for Scientific Information (ISI®)
1955  Garfield19) publishes his idea for a citation index in Science
1951  Eugene Garfield joins the Welch Medical Indexing Project at Johns Hopkins
1945  Bush27) publishes his article on recording people's information trails
1934  Bradford22) publishes his article on the distribution of scientific manuscripts
1927  Gross & Gross20) publish their article on libraries and chemical education
1873  Shepard's Citations is first used by the US legal profession

Calculation Methods

Journal impact factors have always been calculated via a relatively simple formula, which has been described in detail elsewhere38). Briefly, the Institute for Scientific Information in Philadelphia (now known as Thomson Scientific) maintains an ongoing database that continuously encodes all references from the reference lists of papers published in ISI-listed journals39). This ordered list of cited articles is accompanied by a list of citing articles, and is known as the citation index40). The citation index is then used to calculate a journal's impact factor, using two basic components: the numerator and the denominator41). The journal's numerator contains the number of times in the current year that any article published by that same journal in the previous two-year period was cited42). The denominator contains the total number of substantive articles (usually original articles and reviews) published by that journal during the same two-year period43). The numerator is then divided by the denominator to obtain the journal's impact factor for the following year44).
In essence therefore, the impact factor of a journal becomes the number of times that its articles from a given time period were cited, divided by the number of times they could have been cited during that same time45). Although a two-year period for counting citations has always been used, according to Garfield38), the impact factor could just as easily be calculated with a single year's citations alone. Nevertheless, the problem has always been to find a balance between having a more current index (by including only the previous year's citations for example), weighed against the possibility of missing journal articles which naturally take longer to be cited. During their initial formulation of the database in the late 1950s, Garfield and Sher had found that 25% of all citations in the current year were to articles that were only two or three years old, prompting them to choose two years as the cutoff point, which is still used today11). A two-year citation counting period may not be appropriate for all disciplines however, particularly in the field of occupational health, as will be described later in this article.
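
To illustrate the arithmetic, the calculation can be written out as a simple fraction. The figures below are invented purely for demonstration and do not refer to any real journal:

```latex
\mathrm{IF}_{2006}
  = \frac{\text{citations received in 2006 by items published in 2004 and 2005}}
         {\text{citable items published in 2004 and 2005}}
  = \frac{150}{100} = 1.50
```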

Inherent Limitations

While being a relatively simple concept, the impact factor calculation incorporates certain inherent limitations, which have been described by various authors14, 16, 46–51), and are summarized in Table 2.

Table 2. Some Limitations of the Impact Factor

A journal's score is heavily influenced by its total number of citable articles
The calculation method is based on a highly skewed distribution
Relatively few journals continue to receive the bulk of all citations
Relatively few articles in any journal are responsible for most of its citations
The two-year citation counting process favors "fast moving" disciplines
The impact factor number does not take into account the context of the article

Chief among them, and despite Garfield's entreaties, the impact factor is now being increasingly used for the ranking of both scientific journals and the scientists who publish in them47). In doing so, it has become an equally controversial technique when used during the hiring of researchers and academics, particularly in countries where such appointments are heavily influenced by the applicant's total score of impact factors. In some Italian research positions for example, candidates are required to add the total impact factors for journals in which they have published in the last five years and then calculate a weighted average score52). Italian professorial appointments therefore represent a topic for debate53), while similar situations have also been described in Germany54), Japan51) and Spain51). When used for this purpose the impact factor has incurred a variety of criticisms from the scientific community47, 52), and it has been suggested that more reliable evaluation measures need to be found55). While exact comments and criticisms of the journal impact factor may vary, there are a few intrinsic weaknesses of the system which many academics agree on. Porta and colleagues56) for example, have suggested two conceptual reasons as to why the bibliographic impact factor might not be the 'scientometric' indicator of choice. Firstly, they point out that impact factors will always be heavily influenced by the total number of citable articles comprising the denominator upon which the calculation is based. Secondly, the authors note that bibliographic impact factors utilize a poor average value calculated from a highly skewed distribution. On a practical level, this means that a select few journals continue to receive the bulk of all article citations. Even among high-ranking journals, most citations are accumulated by only a few articles, whereas low impact factor journals will tend to publish many papers that remain uncited57). There is also the issue of how fast-paced a particular discipline actually is58). Porta and colleagues59) for example, have described how the current two-year citation period tends to favor "fast moving" and "basic" biomedical disciplines, a concept which may be inappropriate and unfavorable for research fields with relatively longer publication lag times. In this manner, such a system inherently disadvantages research from the social sciences, humanities and nursing for example, where publishing cycles tend to be slower when compared to work from the "bench-lab" science journals60). Another problem lies in the interpretation of citation counts themselves, given the fact that the number does not always take into account the context of the citation61). An erroneous or retracted article for example, may continue to be cited as valid work, simply because it contains incorrect or obsolete material62). Furthermore, two papers give double the citation count of one, even if the second article serves only to correct the original14). Despite these shortfalls, the ability of an impact factor to identify important publications was recognized early on, and the distribution of citation counts in the scientific literature has always been fairly skewed. In 1996 for example21), Garfield estimated that of 3,000 biomedical-related journals, around 500 accounted for half of what is published and about 75% of what is actually cited. Even the prestigious journal Nature was not immune to such skewed distributions, with its editors reporting that 89% of their citations during 2004 were generated by only 25% of their papers44). An earlier analysis of Nature's citation rates conducted by Garfield12) also found that while 99.7% of papers published in 1990 were cited within a one-year period, this figure had fallen to 67.7% by 1998.

Bias during cross-referencing between the disciplines may also be problematic. McCunney and Harzbecker5) for example, reported that general medical journals (which tend to have inherently high impact factors) only rarely cited articles from specialist occupational medicine journals. But whether this issue represents a limitation of the impact factor itself or simply an artifact of scientific publication is uncertain, because the actual number of papers published in a journal should not directly influence the calculation. As Garfield pointed out in 1997 for example63), a large journal with 1,000 articles and 30,000 cited references per year would have the same impact factor as a smaller journal containing only 100 articles and receiving 3,000 citations. In the same manner, famous articles that are constantly cited over a long time period, the so-called citation classics, should also have little effect on a particular journal's impact factor after the two-year citation counting phase is over. Given these considerations, the contribution of super-cited papers to both the academic world and the global scientific community is worthy of some further discussion.

Citation Classics

According to Citation Classics64), a 1951 article by Oliver Lowry in the Journal of Biological Chemistry (JBC)65) remains the most cited paper in the history of science. Despite having received over 400,000 citations from all articles by 2004, the journal in which it originally appeared (the JBC) did not, and does not, have the world's highest impact factor66). Furthermore, extremely high citation counts may not always equal scientific excellence, even in the mind of the author. Although it was cited over 8,000 times in 1994 alone21), 275,669 times by January 200467), and 293,328 times by July 200566), Lowry once reflected that he was not a genius68) and that the 1951 article was by no means his best work64). Even Eugene Garfield found that his most famous 1955 paper in Science19) was not necessarily his most highly cited24). As the impact factor is fundamentally based on citation counts alone, the global scientific importance of super-cited papers cannot be automatically assumed, and thus, remains somewhat contentious. One major confounder stems from the fact that certain articles with enormous scientific and humanitarian impact have not automatically risen to become super-cited papers, nor have the journals in which they were originally published gone on to receive the highest impact factors. Despite now saving over 300,000 lives per year for example69), the discovery of a live polio vaccine and its subsequent landmark article by Sabin and colleagues in 196070) had received just over 90 citations by 198771).
Similarly, Albert Einstein had only been cited 11,920 times by 200172). Whatever the end result of citation counting and related impact factor calculations, it is clear that most scientific authors and their articles will probably never become super-cited. Of the 38 million citable items published between 1900 and 2005 for example, less than 1% were cited more than 200 times and half were not cited at all38). Furthermore, Lowry's classic paper from 195165) was not the highest cited article in either the past two years or the past ten years73). The key determinant of a journal's impact factor therefore, is not so much the actual number of articles it publishes, but rather its citation density and the age of the cited literature38). As such, a journal's impact factor will tend to be driven by a core group of articles that are highly cited in the first few years after publication, rather than its collection of super-cited papers. On the other hand, the size of a particular research field will generally determine the number of super-cited papers it ever contains74). In small disciplines such as occupational health, the number of super-cited papers will not only be limited, but the absolute number of citations each paper ever receives will also be small when compared to other fields. A recent investigation of this topic for example75), found that the top cited paper in occupational medicine76) had received fewer than 1,000 citations, and only three articles had been cited more than 300 times. Similarly, the most frequently cited paper in INDUSTRIAL HEALTH77) was recently shown to have obtained fewer than 100 citations overall78).

Increasing the Impact Factor

Given the modern obsession with impact factors and their potential influence on advertising, subscription rates and author attractiveness, it is not surprising that editors have often sought ways to manipulate their own journal's score. There are a few common methods that are believed to be helpful in this regard79), as summarized in Table 3.

Table 3. Some Techniques Used to Increase the Impact Factor

Cite articles previously published in the same journal
Publish a greater proportion of review articles
Focus on research areas which naturally generate more citations
Eliminate topics and categories that generate few citations
Publish controversial articles, popular themes or "hot topics"
Publish the journal's contents online, for free

Firstly, a journal can encourage its authors, either directly or indirectly, to cite other articles previously published in the same journal. Such behavior appears to have been first reported in the late 1990s80), although the rate at which it is attempted often varies between authors, research fields and between journals of the same specialist field81). In occupational health for example, it was previously demonstrated that a journal with the highest rate of self-citation practiced such a technique approximately three times more often than the lowest "self-citer"5). Regardless of how regularly a particular journal or its authors may actually attempt it, blatant editorial encouragement of self-citation has generally been discouraged by the academic and scientific community82–85). As for its effectiveness, while some investigations have found that self-citation may have a measurable positive effect86), the exact usefulness of self-citing remains unclear. A recent study by Thomson Scientific for example, found that there was only a weak correlation between self-citation and the impact and subject of a journal87). Another technique often used for increasing the impact factor is the review article. In this strategy, a journal can choose to publish a higher proportion of literature reviews compared to original research articles, as reviews will tend to generate a higher number of citations from other authors52). Although review-only journals often have reasonably high impact factors, at least 40,000 review articles are published in the world each year, and not all of them will achieve high impact21); suggesting that it is not a foolproof methodology. Thirdly, a journal may choose to focus on research topics that naturally generate a high number of citations, such as molecular biology, and eliminate research topics that do not52). Fourthly, journals may choose to publish articles that are particularly controversial or deal with inherently controversial issues21). Controversy generates interest among readers, who may then be inclined to cite the article, regardless of its actual scientific or academic value. In this manner, it has been shown that even erroneous articles may continue to receive citations after being retracted62). The publication of a journal's entire contents online, for free, represents a controversial fifth strategy, and one that may or may not influence the impact factor. On one hand, it has been shown that sharing detailed research data may be associated with an increased citation rate in some journals88, 89). The online presence of tables of contents, article abstracts and full-text may also result in a higher impact factor, independent of time and subject category90). On the other hand however, a study by Thomson Scientific91) found no clear effect on impact factors when the published material was open access. Regardless of whether free content increases a journal's impact factor or not, open access to scientific literature online no doubt offers substantial benefits for the scientific community. There are a few reasons for this. Firstly, free online access facilitates the dissemination of scientific information via email links, discussion groups and other formats88).
Secondly, free access helps address an important equity issue for countries that cannot afford expensive journal subscriptions, but would nevertheless benefit from the information they contain. For developing countries in particular, the internet provides many scientists' only chance of accessing literature that would otherwise be too expensive to purchase92). Even so, free access is not always a panacea, as technical access to scientific information remains a serious problem for scholars in developing regions such as Africa, due to computer scarcity, limited bandwidth and difficulty in accessing the internet93). Indeed, the need for greater exploitation of the internet has been previously suggested by Seringhaus and Gerstein1), as a future technique to help truly integrate scientific information on a global scale. Increasing the availability of occupational health research online for free therefore, represents a topic that may need to be addressed in future debates. Irrespective of what strategies actually work in raising a journal's impact factor, it is important not to lose sight of the primary and fundamental objective in science to record and share important information with others1). While striving for the best possible impact factor is understandable in today's competitive publishing environment, an obsession with this goal may invariably become counterproductive. In the worst-case scenario, an "impact-factor-centered" approach may simply lead to a situation where only 'citable' articles are published by scientific journals, often at the expense of more readable and entertaining material15). On the other hand, it should also be remembered that publication of one's research in low impact factor journals is not necessarily a disaster, as useful material is not limited only to high-impact periodicals. In a study of guidelines for evidence-based practice for example, Nakayama and colleagues94) found that journals with low impact factors were still frequently cited as providing important evidence. Perhaps the final word on how to improve a journal's impact factor is already self-evident. If, as Garfield previously stated, a journal's impact is simply a measure of its ability to attract the best papers available21), then it would appear that the ideal way to increase one's impact factor is simply to attract and publish better material. Similarly, it is also worth considering the global expansion of information technology and the power of the internet. In this regard, impact factor scores might naturally increase over time46, 95), as technology makes it easier for authors to access a wider variety of reference material and computing software makes it easier for them to cite more references per article. Indeed, such a revolution may already be underway. Chew and colleagues for example95), recently looked at trends for seven major medical journals and found that most impact factors had risen over the previous 12 yr. This may reflect changes occurring at a broader level as well.
In 1974 for example, it had been noted that only about 150 journals (approximately 6.3% of the total) actually had impact factors greater than three10). By 2005, this figure had risen to approximately 14% (author's calculation).

Impact Factors and Occupational Health

Impact factor variations between different research fields have always depended on the interplay of several items96), to which occupational health is no exception. As this article has already described, there has been an increasing level of debate regarding the overall usefulness and relevance of impact factors for occupational health. Such topics have also been discussed in various journal articles, editorials and letters2–5, 17, 18, 97), the main issues of which are summarized in Table 4. While many inherent shortcomings are equally relevant to occupational health journals as to any other discipline, there are a few additional, and field-specific, limitations worth discussing. Firstly, there is the issue of relative fairness when using a two-year citation counting period, as the contribution of research findings in the field of workplace health often remains unappreciated for some time. This is particularly important because many workplace diseases tend to have a long lag time, meaning that fundamental research is rarely cited in the first few years after publication.

Table 4. Some Considerations for the Field of Occupational Health

The research and practical field of occupational health itself is rather small
The importance of many workplace disease studies is not immediately apparent
Citations in this field generally have a longer lag time than two years
The number of potential readers, authors and article "citers" is inherently limited
The highest impact factor journals rarely have a score greater than two

Similarly, the impact of small studies and individual case reports in occupational health may also pass unnoticed for some time. Although Australia has one of the world's highest mesothelioma incidence rates for example98), the first local case report describing occupational asbestos exposure and mesothelioma in 196299) was largely ignored. While it is unclear exactly how long the citation counting period for occupational health periodicals might be extended to, it appears that our field would certainly benefit from some kind of extension18). At a broader level, the two-year citation counting period may need to be adjusted not only for this discipline, but also for larger and more general fields with relatively longer lag times in their research, such as public health. Another key limitation is the fact that impact factors for specialist occupational health journals have historically been, and continue to be, quite low. While it is well-known that impact factors of most periodicals in this field rarely exceed two17), certain prestigious clinical journals may have impact factor scores twenty times higher than that18). This phenomenon was graphically illustrated by Uehara et al.18), who listed the impact factors of some "typical" occupational health journals at the time, finding that the highest score was only 1.508. A similar situation is also evident when using contemporary data adapted from the 2006 Journal Citation Reports®. More recent JCR® impact factors for Uehara et al.'s18) typical occupational health journals are shown in Table 5, albeit with the inclusion of some additional publications, which are also considered important to the field.

Table 5. Selected Occupational Health Journals and their Current Impact Factors

Journal Title                                                        Impact Factor*
Occupational and Environmental Medicine                              2.255
Journal of Occupational and Environmental Medicine                   1.942
Journal of Occupational Health                                       1.848
Scandinavian Journal of Work, Environment and Health                 1.735
International Archives of Occupational and Environmental Health      1.520
American Journal of Industrial Medicine                              1.433
Industrial Health                                                    0.911
Occupational Medicine—Oxford                                         0.812

* Current journal impact factors as published in the Thomson Scientific 2006 Journal Citation Reports®.

Similar to the 2003 study18), impact factors for dedicated occupational health journals in 2006 (the latest available data) rarely exceed two. Aside from the longer citation lag times previously mentioned, these comparatively lower impact factors may reflect a limited number of academics in the field, different publishing customs and the intrinsic decision making processes across different disciplines17).
Relative prestige when trying to build a scientific career is also important, as research suggests that the strongest predictor of yearly citations for a given article is the impact factor of the journal in which it was originally published100). Authors whose work appears in dedicated occupational health journals therefore, may be destined for few citations, and to aid their career, these researchers may subsequently opt to publish the bulk of their work in general medical journals with higher impact factors. Occupational health periodicals may then be relegated to publishing less important findings, which were deemed unsuitable by the authors as potential "crossover" articles for the general medical literature. To help compensate for the relatively lower impact factors generally observed in occupational health, Takahashi et al.17) first proposed the concept of "topic-based" impact factors in 1999. This technique, they believed, would help alleviate problems associated with different publication customs, and also help adjust for the relative number of academics working in a specific topic area. In their concept, demonstrated more clearly in a later journal article18), topic-based impact factors were obtained by dividing the number of citations received by an article during a specific time period by the number of articles in the topic group to which it belongs17). This method simply replaces the group of articles published in a particular journal with the group of articles published on a particular topic18). Garfield himself took some interest in their approach97), pointing out, however, that the Medline system and the Medical Subject Headings (MeSH) on which Takahashi et al.'s17) proposal is based may suffer from inconsistencies between indexers applying MeSH and changing terminologies. Despite this fact, the concept of topic-based impact factors no doubt offers potential benefits for relatively small research fields, such as occupational health, particularly due to its usefulness as a reference value for the citation counts of individual papers18). As such, it remains to be seen if the concept develops greater momentum in the 21st century.
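
Schematically, and only as one reading of the description above rather than the authors' exact notation, the proposal replaces the journal-level denominator of the standard calculation with a topic-level one:

```latex
\mathrm{IF}_{\text{topic}}
  = \frac{\text{citations received during the counting period by articles on the topic}}
         {\text{number of articles published on that topic during the same period}}
```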

The issue of how to select appropriate publications in the field of occupational health has also been discussed from a wider perspective than impact factors alone. In a study of occupational-health-specific publications for example, Gehanno and Thirion9) reported that only 1.4% of the journals they retrieved accounted for more than 25% of the total articles published. More tellingly, only 66% of these retrieved journals actually had an impact factor. Clearly therefore, using the impact factor alone as a tool for identifying important manuscripts in the field of occupational health would appear to be problematic. The issue of suboptimal reference accuracy in the occupational health literature has recently been highlighted101), while an additional conundrum is also evident in the major overlap between specific occupational health articles and basic medical sciences. In an earlier study, McCunney and Harzbecker5) for example, found that occupational medicine journals were 50 times more likely to cite general medical literature than vice versa. This clearly suggests that while general medical fields may attract the attention of occupational medicine, the reverse is seldom true. While it is difficult to see how such a situation may be alleviated in future, an examination of general future challenges for the impact factor offers some insight regarding the way forward.

Future Challenges

From a conceptual perspective, the journal impact factor is but one of many challenges faced by the contemporary scientific researcher. From a global perspective, a key issue to be addressed in future is that of research equity. Similarly, the entire nature of scientific research and development itself is constantly changing, and with it, the world of journal publication. Stembridge102) for example, has noted that there is a current drift in the globalization of international research and development, with China proving to be particularly attractive in this regard due to low costs and a large increase in the university-educated labor force. The number of researchers employed in China is now close to one million, making it the second largest workforce of this type in the world, behind the US102). For these reasons and more, it is predicted that China will grow in the coming years to become the second largest spender on research and development, ahead of Japan and behind the US. The emergence of China as a major contributor to global research, and the increasing importance of Chinese-language journals, will undoubtedly have a major influence on international scientific research, publishing and impact factors in the new millennium. In this sense, country of origin represents an important issue that will also need to be addressed in future debates. While biomedical publication is known to be biased towards countries with higher economic rankings103), not all developed nations have the same research output. Based on their analysis of data from 1995–2003 for example, Falagas and colleagues104) suggested that Canada currently has the highest research productivity, after adjusting for GDP. While Japan's share of original articles for basic medical science increased between 1991 and 2000, its proportion of review articles in the same time period has remained relatively low105). From a global perspective, it has been noted that gross national product as well as research and development expenditure are probably the most important issues to address when considering biomedical research productivity106). Other worldwide influences are also relevant for the evaluation of scientific output.

It has been suggested for example, that the impact factor continues to incur a strong bias against culture- and language-bound specialties107), the coverage of journals from non-English speaking countries in the citation index remains suboptimal108) and that there may be an over-representation of US citations in the literature109). This is unfortunate, as the contribution of scholars from countries where English is not the major language no doubt occupies a key position in the global body of scientific knowledge. Even so, foreign authors are not necessarily being ignored by citation indexing systems, and Garfield has pointed out that non-English language journals will still be cited in the SCI. German scientists for example, published almost 80,000 papers in 1997 that were covered by the SCI, about 12,000 of which were published in the German language110). On the other hand, most contemporary scientists now recognize the importance of English-language publication of their work111). While numerous criticisms have often been leveled at the journal impact factor and its appropriateness for evaluating scientific research, improving the current system may also be equally complicated. Garfield himself originally stated that the ultimate success of a citation index would depend on many factors112). Indeed, as researchers, we may never be able to escape such a concept. While creating an alternative may be desirable, it will always be possible to calculate impact factors as long as scientists keep publishing articles with reference lists113). Given this situation, perhaps the most useful improvement for the impact factor would be to cease using it as the sole evaluation criterion of research quality. While evaluating scientific quality has always been a notoriously difficult issue with no standard solution114), perhaps the main criticism historically leveled against the impact factor is that a single number should not be expected to reliably measure the scientific credibility of individual journal articles39). In a proposal to address this situation, Zwahlen and colleagues115) have suggested that a wider discussion of the criteria and procedures used for evaluating scientific research is now needed. Regarding author contribution, Motta116) has pointed out that while dividing an article's impact factor by the number of authors may appear to be a simple solution, it may not necessarily be an appropriate strategy. Similarly, Garfield has noted that although the issue of journal authorship no doubt represents a key factor in a scientist's career development, first authorship is not always essential for one to be fully credited in the citation system117).
While the "H-Index", developed by Jorge Hirsch from the University of California, provides an expression of an individual scientist's output and quality37), establishing the quality of each individual paper would seem to be a more useful measure in deciding which research findings are useful or not. Another recent alternative is the "Strike Rate Index", proposed by Barendse118). Similarly, the evaluation of individual journals in occupational health, as elsewhere, may also be improved by using a combination of existing scales. The Immediacy Index for example, is a common measure that describes the average number of times that an article published in a specific journal within a set year is cited during that same year119). This provides a measure of the relative speed at which citations to a specific journal are appearing in the literature. Another determinant is the journal's Cited Half Life (CHL), a measure of the number of years, going back from the current year, that accounts for 50% of the citations received by that journal in the current year74, 119). Using such measures to help normalize the impact factor score is just one of the many important issues that will need to be further discussed and considered by the scientific community in future years.
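
As a concrete toy illustration of these two measures, the short Python sketch below uses invented figures for a hypothetical journal and rounds the Cited Half Life to whole years, whereas the JCR itself reports interpolated fractional values:

```python
# Illustrative sketch only: all figures are invented, not real JCR data.

# Citations received by a hypothetical journal in the current year, broken down
# by the age of the cited article: element 0 = items published this year,
# element 1 = items published one year earlier, and so on.
citations_by_age = [30, 80, 70, 50, 40, 30, 20, 15, 10, 5]
items_published_this_year = 100

# Immediacy Index: same-year citations divided by the number of items
# published in that same year.
immediacy_index = citations_by_age[0] / items_published_this_year  # 0.3

# Cited Half Life (whole-year approximation): how many publication years,
# counting back from the current year, account for 50% of this year's citations.
half_of_total = sum(citations_by_age) / 2
running_total = 0
cited_half_life = len(citations_by_age)
for years_back, count in enumerate(citations_by_age, start=1):
    running_total += count
    if running_total >= half_of_total:
        cited_half_life = years_back
        break

print(immediacy_index, cited_half_life)  # 0.3 3
```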

Conclusion

Despite some inherent limitations, the advent of an impact factor last century no doubt marked a turning point in the global development of scientific publication and referencing systems. Its invention has often been compared to nuclear energy, given that both represent a mixed blessing which can be used constructively by some, and just as easily abused by others66). While many scholars have denounced its contemporary role in the evaluation of scientific research, the concept of an impact factor was probably still ahead of its time, and has no doubt altered scientific publishing over the past fifty years. When considering its relationship with our field, it is worth remembering that despite being a poorly cited discipline, occupational health research still contributes major findings to the field of global human health. Some inspiration may be found in the discipline of taxonomy, which, although it occupies a poorly-cited research field120) where impact factors may be irrelevant121), nevertheless remains a proud group with its own distinct citation classics122). Forensic science and legal medicine represents another research field that often publishes topics of great societal importance, despite having few journals with impact factors exceeding two123). Regardless of one's 'citationist' perspective, many challenges continue to exist for occupational health as a distinct medical specialty124). Perhaps we should, as Frumkin125) suggests, seek to "reinvent" rather than "lament". From a publishing perspective, changes may already be underway.
In a recent Medline search for example126), it was demonstrated that the number of journal articles coded for the word "occupational" increased from 24,062 in the 1970s to 36,249 in the 1980s and then to 53,388 in the 1990s. Whether these gains will translate into a brighter future for the impact factor in occupational health journals, however, remains to be seen. Despite the criticisms raised during this article, the concept of an impact factor and its relevance for occupational health is not necessarily a flawed one. Indeed, half a century after his original article was first written, Porta and colleagues56) noted that in developing the impact factor, Garfield displayed 'amazing vision, imagination, creativity, and ambition' (p.1133). Due to the controversy incurred since its inception, however, and the increasingly diverse manner in which it is now being used, it remains to be seen what the next 50 years of impact factors will bring for occupational health and other disciplines.

References

1) Seringhaus MR, Gerstein MB (2007) Publishing perishing? Towards tomorrow's information architecture. BMC Bioinformatics 8, 17.
2) Brandt-Rauf P (2000) Congratulations from the editor of Journal of Occupational and Environmental Medicine. Occup Med (Lond) 50, 456–7.
3) Sawada S, Smith DR, Araki S (2007) The impact factor and INDUSTRIAL HEALTH. Ind Health 45, 501–2.
4) Carter T (2000) The three faces of Occupational Medicine: printed paper, problems in practice, and professional purpose. Occup Med (Lond) 50, 460–70.
5) McCunney RJ, Harzbecker J (1992) The influence of occupational medicine on general medicine: a look at the journals. J Occup Med 34, 279–86.
6) Takahashi K, Hoshuyama T, Ikegami K, Itoh T, Higashi T, Okubo T (1996) A bibliometric study of the trend in articles related to epidemiology published in occupational health journals. Occup Environ Med 53, 433–8.
7) Sevinc A (2004) Manipulating impact factor: an unethical issue or an Editor's choice? Swiss Med Wkly 134, 410.
8) Garfield E (1982) More on scientific journals. N Engl J Med 307, 506.
9) Gehanno JF, Thirion B (2000) How to select publications on occupational health: the usefulness of Medline and the impact factor. Occup Environ Med 57, 706–9.
10) Berry EM (1981) Sounding Board. The evolution of scientific and medical journals. N Engl J Med 305, 400–2.
11) Garfield E (2007) The evolution of the Science Citation Index. Int Microbiol 10, 65–9.
12) Garfield E (2000) Use of Journal Citation Reports and Journal Performance Indicators in measuring short and long term journal impact. Croat Med J 41, 368–74.
13) Ball P (2006) Prestige is factored into journal ratings. Nature 439, 770–1.
14) Lawrence PA (2003) The politics of publication. Nature 422, 259–61.
15) Smith R (2006) Commentary: the power of the unrelenting impact factor—is it a force for good or harm? Int J Epidemiol 35, 1129–30.
16) Walter G, Bloch S, Hunt G, Fisher K (2003) Counting on citations: a flawed way to measure quality. Med J Aust 178, 280–1.
17) Takahashi K, Aw TC, Koh D (1999) An alternative to journal-based impact factors. Occup Med (Lond) 49, 57–9.
18) Uehara M, Takahashi K, Hoshuyama T, Tanaka C (2003) A proposal for topic-based impact factors and their application to occupational health literature. J Occup Health 45, 248–53.
19) Garfield E (1955) Citation indexes for science; a new dimension in documentation through association of ideas. Science 122, 108–11.
20) Gross PLK, Gross EM (1927) College libraries and chemical education. Science 66, 385–9.
21) Garfield E (1996) How can impact factors be improved? BMJ 313, 411–3.
22) Bradford SC (1934) Sources of information on specific subjects. Engineering 137, 85–6.
23) Adair WC (1955) Citation indexes for scientific literature? Am Document 6, 31–2.
24) Garfield E (2006) Commentary: fifty years of citation indexing. Int J Epidemiol 35, 1127–8.
25) Bensman SJ (2007) Garfield and the Impact Factor. Ann Rev Inform Sci Technology 41, 93–155.
26) Broad WJ (1978) Librarian turned entrepreneur makes millions off mere footnotes. Science 202, 853–7.
27) Bush V (1945) As we may think. Atlantic Monthly 176, 101–8.
28) Anonymous (2007) Interview with Eugene Garfield: a lifetime of achievement and still going strong. Information Today 24, 21.
29) Hopkin K (2005) Most highly cited. The Scientist 19, 22–3.
30) Garfield Library Website. Eugene Garfield Ph.D. Career Overview. http://www.garfield.library.upenn.edu/overvu.html. Accessed May 24, 2007.
31) Thomson Scientific Website. The eureka moment: The breakthrough that changed indexing forever. http://thomsonscientific.com/promo/celebration/eureka. Accessed May 24, 2007.
32) Garfield E (1980) How it all began: with a loan from HFC. Essays of an Information Scientist 4, 359–62.
33) Price DJ (1965) Networks of scientific papers. Science 149, 510–5.
34) Garfield E (1967) Information retrieval. Science 156, 1398–401.
35) Thomson Scientific Website. Thomson Scientific: Company timeline. http://scientific.thomson.com/isi/timeline. Accessed May 24, 2007.
36) Garfield E (1972) Citation analysis as a tool in journal evaluation. Science 178, 471–9.
37) Perkel JM (2005) The future of citation analysis. The Scientist 19, 24–5.
740 38) Garfield E (2006) The history and meaning of the journal impact factor. JAMA 295, 90–3. 39) Ha TC, Tan SB, Soo KC (2006) The journal impact factor: too much of an impact? Ann Acad Med Singapore 35, 911–6. 40) Garfield E (1970) Citation indexing for studying science. Nature 227, 669–71. 41) Garfield E (1999) Journal impact factor: a brief review. CMAJ 161, 979–80. 42) Saha S, Saint S, Christakis DA (2003) Impact factor: a valid measure of journal quality? J Med Libr Assoc 91, 42–6. 43) Dong P, Loh M, Mondry A (2005) The “impact factor” revisited. Biomedical Digital Libraries 2, 7. 44) Anonymous (2005) Not-so-deep impact. Nature 435, 1003–4. 45) Whitehouse GH (2001) Citation rates and impact factors: should they matter? Br J Radiol 74, 1–3. 46) Abbasi K (2007) Why journals can live without impact factor and cluster bombs. J R Soc Med 100, 113. 47) Adam D (2002) The counting house. Nature 415, 726–9. 48) Andersen J, Belmont J, Cho CT (2006) Journal impact factor in the era of expanding literature. J Microbiol Immunol Infect 39, 436–43. 49) Lundberg G (2003) The “omnipotent” Science Citation Index impact factor. Med J Aust 178, 253–4. 50) Moed HF (2002) The impact-factors debate: the ISI’s uses and limits. Nature 415, 731–2. 51) Abbasi K (2004) Let’s dump impact factors. BMJ 329, e1. 52) Anonymous (1998) Citation data: the wrong impact? Nat Neurosci 1, 641-2. 53) Calza L (1995) Italian professorships. Nature 374, 492. 54) Kaltenborn KF (2004) Validity and fairness of the impact factor— a comment on the article by Decker et al. Soz Praventivmed 49, 23–4. 55) Marashi SA (2005) On the identity of “citers”: are papers promptly recognized by other investigators? Med Hypotheses 65, 822. 56) Porta M, Fernandez E, Bolumar F (2006) Commentary: the ‘bibliographic impact factor’ and the still uncharted sociology of epidemiology. Int J Epidemiol 35, 1130–5. 57) Opthof T, Coronel R, Piper HM (2004) Impact factors: no totum pro parte by skewness of citation. Cardiovasc Res 61, 201–3. 58) Della Sala S, Crawford JR (2007) A double dissociation between impact factor and cited half life. Cortex 43, 174–5. 59) Porta M, Copete JL, Fernandez E, Alguacil J, Murillo J (2003) Mixing journal, article, and author citations, and other pitfalls in the bibliographic impact factor. Cad Saude Publica 19, 1847–62. 60) Johnstone MJ (2007) Journal impact factors: implications for the nursing profession. Int Nurs Rev 54, 35–40. 61) Opthof T (1997) Sense and nonsense about the impact factor. Cardiovasc Res 33, 1–7. 62) Budd JM, Sievert ME, Schultz TR (1998) Phenomena

DR SMITH

63) 64) 65)

66)

67)

68) 69)

70)

71) 72) 73) 74)

75)

76)

77)

78)

79) 80) 81)

82)

of retraction: reasons for retraction and citations to the publications. JAMA 280, 296–7. Garfield E (1997) Dispelling a few common myths about journal citation impacts. The Scientist 11, 11–2. Lowry OH, Rosebrough NJ, Farr AL, Randall RJ (1977) Citation Classics. Curr Contents Life Sci 1, 87. Lowry OH, Rosebrough NJ, Farr AL, Randall RJ (1951) Protein measurement with the Folin phenol reagent. J Biol Chem 193, 265–75. Garfield E (2005) The agony and the ecstasy— The history and meaning of the Journal Impact Factor, 1–22, Presented at the International Congress on Peer Review and Biomedical Publication, Chicago. Kresge N, Simoni RD, Hill RL (2005) The most highly cited paper in publishing history: protein determination by Oliver H. Lowry. J Biol Chem 280, e25–8. Lowry OH (1990) How to succeed in research without being a genius. Annu Rev Biochem 59, 1–27. Smith DR, Leggat PA (2005) Pioneering figures in medicine: Albert Bruce Sabin— inventor of the oral polio vaccine. Kurume Med J 52, 111–6. Sabin AB, Ramos-Alvarez M, Alvarez-Amezquita J, Pelon W, Michaels RH, Spigland I, Koch MA, Barnes JM, Rhim JS (1960) Live, orally given poliovirus vaccine. Effects of rapid mass immunization on population under conditions of massive enteric infection with other viruses. JAMA 173, 1521–6. Garfield E (1987) 100 citation classics from the Journal of the American Medical Association. JAMA 257, 52–9. van der Velde G (2001) Taxonomists make a name for themselves. Nature 414, 148. Anonymous (2005) Top 10 papers published. The Scientist 19, 26. Anonymous (2001) Interview with Eugene Garfield, Chairman Emeritus of the Institute for Scientific Information (ISI). Cortex 37, 575–7. Gehanno JF, Takahashi K, Darmoni S, Weber J (2007) Citation classics in occupational medicine journals. Scand J Work Environ Health 33, 245–51. Wagner JC, Sleggs CA, Marchand P (1960) Diffuse pleural mesothelioma and asbestos exposure in the North Western Cape Province. Br J Ind Med 17, 260–71. Kawakami N, Haratani T (1999) Epidemiology of job stress and health in Japan: review of current evidence and future direction. Ind Health 37, 174–86. Smith DR, Sawada S, Araki S (2007) Twenty years of publishing trends and citation indexing at INDUSTRIAL HEALTH, 1987–2006. Ind Health 45, 717–20. Gowrishankar J, Divakar P (1999) Sprucing up one’s impact factor. Nature 401, 321–2. Smith R (1997) Journal accused of manipulating impact factor. BMJ 314, 463. Miguel A, Marti-Bonmati L (2002) Self-citation: comparison between radiologia, European radiology and radiology for 1997-1998. Eur Radiol 12, 248–52. Hemmingsson A (2002) The use of impact factors is


83) Hemmingsson A (2002) Use of impact factors. Clin Radiol 57, 236 (author reply).
84) Hemmingsson A, Edgren J, Mygind T, Skjennald A (2002) Impact factors in scientific journals. J Magn Reson Imaging 15, 619.
85) Hemmingsson A, Edgren J, Mygind T, Skjennald A (2002) Use of impact factors. Lancet 359, 173.
86) Fassoulaki A, Paraskeva A, Papilas K, Karabinis G (2000) Self-citations in six anaesthesia journals and their significance in determining the impact factor. Br J Anaesth 84, 266–9.
87) McVeigh ME (2004) Journal self-citation in the Journal Citation Reports®—Science Edition (2002): a citation study from The Thomson Corporation. Thomson Scientific, Philadelphia.
88) Lawrence S (2001) Free online availability substantially increases a paper’s impact. Nature 411, 521.
89) Piwowar HA, Day RS, Fridsma DB (2007) Sharing detailed research data is associated with increased citation rate. PLoS ONE 2, e308.
90) Curti M, Pistotti V, Gabutti G, Klersy C (2001) Impact factor and electronic versions of biomedical scientific journals. Haematologica 86, 1015–20.
91) Testa J, McVeigh ME (2004) The impact of open access journals: a citation study from Thomson ISI. Thomson Scientific, Philadelphia.
92) Khan FA (2001) The Net is many people’s only chance of access. Nature 411, 522.
93) Keese P (2001) Even ‘free access’ is still beyond the means of most scholars in Africa. Nature 410, 1021.
94) Nakayama T, Fukui T, Fukuhara S, Tsutani K, Yamazaki S (2003) Comparison between impact factors and citations in evidence-based practice guidelines. JAMA 290, 755–6.
95) Chew M, Villanueva EV, Van Der Weyden MB (2007) Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors’ views. J R Soc Med 100, 142–50.
96) Garfield E (1976) Significant journals of science. Nature 264, 609–15.
97) Garfield E (1999) Refining the computation of topic based impact factors—some suggestions. Occup Med (Lond) 49, 571–2.
98) Kazan-Allen L (2005) Asbestos and mesothelioma: worldwide trends. Lung Cancer 49(Suppl 1), S3–8.
99) McNulty JC (1962) Malignant pleural mesothelioma in an asbestos worker. Med J Aust 49, 953–4.
100) Callaham M, Wears RL, Weber E (2002) Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA 287, 2847–50.
101) Gehanno JF, Darmoni SJ, Caillard JF (2005) Major inaccuracies in articles citing occupational or environmental medicine papers and their implications. J Med Libr Assoc 93, 118–21.

102) Stembridge B (2007) Eastward ho!—The geographic drift of global R&D. Thomson Scientific, Philadelphia.
103) Rahman M, Fukui T (2003) Biomedical publication—global profile and trend. Public Health 117, 274–80.
104) Falagas ME, Michalopoulos AS, Bliziotis IA, Soteriades ES (2006) A bibliometric analysis by geographic area of published research in several biomedical fields, 1995–2003. CMAJ 175, 1389–90.
105) Rahman M, Sakamoto J, Fukui T (2004) Japan’s share of research output in basic medical science. Keio J Med 53, 172–7.
106) Rahman M, Fukui T (2003) Biomedical research productivity: factors across the countries. Int J Technol Assess Health Care 19, 249–52.
107) Brahler E, Beutel M, Decker O (2004) Deep impact—evaluation in the sciences. Soz Praventivmed 49, 10–4.
108) Ren S, Zu G, Wang HF (2002) Statistics hide impact of non-English journals. Nature 415, 732.
109) Moller AP (1990) National citations. Nature 348, 480.
110) Garfield E (1998) The impact factor and its proper application. Unfallchirurg 101, 413–4.
111) Presley RL, Caraway BL (1999) An interview with Eugene Garfield. Serials Review 25, 67–80.
112) Garfield E (2006) Citation indexes for science. A new dimension in documentation through association of ideas. 1955. Int J Epidemiol 35, 1123–7 (discussion 7–8).
113) Garfield E (2001) Impact factors, and why they won’t go away. Nature 411, 522.
114) Seglen PO (1997) Why the impact factor of journals should not be used for evaluating research. BMJ 314, 498–502.
115) Zwahlen M, Junker C, Egger M (2004) The journal impact factor in the evaluation of research quality: villain, scapegoat or innocent bystander? Soz Praventivmed 49, 19–22.
116) Motta G (1995) Journal impact factors. Nature 376, 720.
117) Garfield E (1997) All sorts of authorship. Nature 389, 777.
118) Barendse W (2007) The strike rate index: a new index for journal quality based on journal size and the h-index of citations. Biomed Digit Libr 4, 3.
119) Thomson Scientific Website. Glossary of Thomson Scientific terminology. http://scientific.thomson.com/suport/patents/patinf/terms. Accessed May 24, 2007.
120) Valdecasas AG, Castroviejo S, Marcus LF (2000) Reliance on the citation index undermines the study of biodiversity. Nature 403, 698.
121) Krell FT (2000) Impact factors aren’t relevant to taxonomy. Nature 405, 507–8.
122) Garfield E (2001) Taxonomy is small, but it has its citation classics. Nature 413, 107.
123) Jones AW (2007) The distribution of forensic journals, reflections on authorship practices, peer-review and role of the impact factor. Forensic Sci Int 165, 115–28.
124) Imbus HR (2004) Fifty years of hope and concern for the future of occupational medicine. J Occup Environ Med 46, 96–103.

125) Frumkin H (2002) Don’t lament, reinvent! The future of occupational medicine. Am J Ind Med 42, 526–8.

126) Levy BS (2002) The continuing rise of occupational medicine. Am J Prev Med 22, 326–7.
