Diagnostic accuracy of linked administrative data for dementia diagnosis in community-dwelling older men in Australia

Abstract

Background

Routinely collected health administrative data can be used to estimate the prevalence or incidence of dementia at a population level but can be inaccurate. This study aimed to examine the accuracy of hospital and death data for diagnosing dementia compared with a clinical diagnosis in community-dwelling older men in Australia.

Methods

We performed a retrospective analysis of the Concord Health and Ageing in Men Project (CHAMP) in Sydney, Australia. Of the 1705 men aged ≥70 years in the CHAMP study, 1400 had available linked administrative data records from 1 year prior to 1 year post the date of clinical dementia diagnosis. The primary outcome was the accuracy of dementia diagnosis using linked administrative data records compared to clinical dementia diagnosis. The linked data diagnosis was based on hospital and death records for the 1 year pre and post the clinical diagnosis. Clinical dementia diagnosis was a two-stage process with initial screening, followed by clinical assessment for those meeting a validated cut-off. A final clinical diagnosis of dementia based on the Diagnostic and Statistical Manual of Mental Disorders (4th edition) criteria was reached by a consensus panel.

Results

Administrative data identified 29 participants as having dementia, compared to 88 identified through clinical assessment. Administrative data had a sensitivity of 20% (95% CI: 13–30%, 18/88), specificity of 99% (95% CI: 99–100%, 1301/1312), positive predictive value (PPV) of 62% (95% CI: 44–77%), negative predictive value of 95% (95% CI: 94–95%), positive likelihood ratio of 24.4 (95% CI: 11.9–50.0) and negative likelihood ratio of 0.80 (95% CI: 0.72–0.89).

Conclusions

Administrative hospital and death data has limited accuracy for dementia diagnosis, with poor sensitivity and PPV. The prevalence of dementia is likely underestimated when hospital and death data are used.

Introduction

An estimated 472,000 people were living with dementia in Australia in 2021, [1] with 10% of people aged ≥65 years and 30% of people aged ≥85 years having dementia [2]. In Australia, the prevalence of dementia is generally estimated using a range of data sources, including routinely collected health data such as hospitalisations and deaths and other health administrative data such as pharmaceutical claims for dementia-specific medications and aged care assessments [3,4,5,6]. However, the accuracy of, and agreement between, these data sources are not well understood. There are two important sources of inaccuracy in administrative data for dementia diagnosis. The first is inconsistent coding of dementia diagnoses from health records [7]. The second is the inability of such data to capture dementia in people not using health services, including those who are not aware that they have dementia and thus have not sought a diagnosis. Furthermore, comorbidities (e.g. mild dementia) may not be identified in administrative data because they have no impact on the episode of care and are therefore not coded in the hospital admission record. This is an important consideration as it has been estimated that 62% of people living with dementia in the community are undiagnosed [8]. Moreover, this undercounting of dementia in administrative data may be more pronounced in individuals who have reduced access to or use of health services, such as those with lower education or income and some ethnic minority groups [9,10,11].

Ideally, the population prevalence of dementia should be evaluated in a population-representative group using the gold-standard diagnostic approach, which involves screening for cognitive impairment, followed by detailed clinical assessment of screen-positive patients and a final diagnosis reached by consensus of a panel of clinicians [12]. Depending on the effectiveness of the initial screening, this approach identifies people with dementia who may otherwise have gone undiagnosed [8]. However, such comprehensive studies are resource intensive and not always feasible, particularly for large population-based cohorts [13]. Some countries have established dementia registries with the potential to improve understanding of the incidence of dementia. In 2020, Australia established the Australian Dementia Network (ADNeT) Registry, but it will take several years for this registry to be useful for epidemiological monitoring [14]. In the absence of a registry, using routinely collected health administrative data may be a cost-effective approach to determining the prevalence or incidence of dementia at a population level [3, 13, 15, 16].

This study aims to examine the diagnostic accuracy of hospital and death data for identifying individuals with dementia compared with a clinical dementia diagnosis in an ethnically diverse community-based cohort of men aged ≥70 years in Australia.

Methods

Study population

Data analysed come from the Concord Health and Ageing in Men Project (CHAMP): a cohort study of older men in Sydney, Australia. Further details of the methodology are described elsewhere [17]. In brief, 1705 men aged 70 years and over who lived in three local government areas (Burwood, Canada Bay and Strathfield) in inner western Sydney were recruited between 28 January 2005 and 4 June 2007. The New South Wales (NSW) Electoral Roll was used as the sampling frame for CHAMP which, as voting is compulsory in Australia, provides a representative and regularly updated sampling frame. The only exclusion criterion was living in a residential aged care facility. The baseline participation rate was 47% among eligible men with whom contact was made. Ethical approval was granted by the Sydney Local Health District Human Research Ethics Committee (HERC/14/CRGH/17). Written informed consent was obtained from all participants.

Linked administrative data and dementia codes

Linked data on hospital admissions and deaths were obtained from the NSW Admitted Patient Data Collection (APDC) and the Australian Coordinating Registry (ACR) Cause of Death Unit Record File (COD-URF) from 1 July 2004, through the Centre for Health Record Linkage (CHeReL). The NSW APDC contains the date of each hospital admission and the diagnosis codes assigned using the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM). As there is no unique patient identifier in Australia, multiple records from the same patient are combined using a probabilistic record linkage approach [18]. The COD-URF provided cause of death data coded using the International Classification of Diseases, Tenth Revision (ICD-10).

There is no established, validated time frame for assessing the diagnostic accuracy of administrative data for dementia diagnosis, and multiple time frames have been used in past research, ranging from 6 months to 2 years [19]. In this study, we defined a diagnosis of dementia using administrative data as at least one hospital admission or death with a relevant ICD-10-AM code listed anywhere in the record in the 1 year prior to (hospital only) and the 1 year after the date of the clinical dementia diagnosis (Table S1). This time frame was selected in consultation with a geriatrician (LW), given the lack of a validated time frame [20]. All participants were successfully linked to the administrative databases. Participants with no hospital admission within the time frame (474 men) were assumed not to have dementia, as per standard practice [21,22,23].
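To make this case definition concrete, the sketch below shows one way it could be implemented. It is written in Python purely for illustration (the study's analyses were run in Stata), and the record structures, field names and code prefixes are assumptions rather than the study's code; the ICD-10-AM codes actually used are those listed in Table S1.

```python
# Illustrative sketch only: flag dementia from linked hospital (APDC) and
# death (COD-URF) records around the clinical diagnosis date.
# Field names and code prefixes are assumptions; see Table S1 for the real codes.
from dateutil.relativedelta import relativedelta

DEMENTIA_CODE_PREFIXES = ("F00", "F01", "F02", "F03", "G30")  # placeholder prefixes

def has_dementia_code(codes):
    """True if any diagnosis code listed anywhere in the record starts with a dementia prefix."""
    return any(str(c).startswith(DEMENTIA_CODE_PREFIXES) for c in codes)

def admin_dementia_flag(clinical_diagnosis_date, hospital_records, death_record,
                        pre=relativedelta(years=1), post=relativedelta(years=1)):
    """Case definition: at least one hospital admission with a dementia code in the
    window from 1 year before to 1 year after the clinical diagnosis date, or a
    death record with a dementia code in the post period."""
    start = clinical_diagnosis_date - pre
    end = clinical_diagnosis_date + post

    for rec in hospital_records:  # NSW APDC admissions (pre and post window)
        if start <= rec["admission_date"] <= end and has_dementia_code(rec["icd10am_codes"]):
            return True

    if death_record is not None:  # COD-URF record (post window only)
        if clinical_diagnosis_date <= death_record["death_date"] <= end and \
                has_dementia_code(death_record["cause_of_death_codes"]):
            return True

    return False
```

Under this sketch, a man with no hospital admission or death record in the window receives a flag of False, i.e. is assumed not to have dementia, consistent with the approach described above.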

Reference standard

All CHAMP participants were screened for cognitive impairment using the Mini-Mental State Examination (MMSE) [24] at the baseline clinic visit. The 16-item Short Form of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [25] was also completed at baseline by a nominated contact such as a spouse or child. Participants who did not have an IQCODE completed were not excluded. Participants with an MMSE score ≤ 26 and/or IQCODE score ≥ 3.6 were invited to undergo a detailed clinical assessment for dementia by a geriatrician. The geriatrician’s assessment lasted approximately 1–2 hours and included a review of medical comorbidities and medications, a standardized neurological assessment, a clinical dementia rating based on a detailed informant interview, [26] and the Rowland Universal Dementia Assessment Scale (RUDAS) in those whose first language was not English. The RUDAS was specifically designed for culturally and linguistically diverse populations and people with low education [27,28,29]. The final clinical diagnosis (i.e. normal, mild cognitive impairment, dementia, or unknown) was reached by consensus of a panel of two geriatricians, a neurologist and a neuropsychologist, based on the Diagnostic and Statistical Manual of Mental Disorders (4th edition) criteria [30]. For some men who met the criteria for further clinical assessment by a geriatrician, a final dementia diagnosis was not reached, either because (a) they were unavailable for the further detailed assessment or (b) the panel was unable to reach a consensus diagnosis from the information available for those who had undergone further assessment. In both cases, these men were classified as having ‘unknown’ dementia status. The clinical diagnoses in CHAMP and the dementia diagnoses reported in medical records were made blind to each other.

Diagnosis of dementia based on administrative data

The primary outcome was based on administrative data records covering the 1 year prior to (hospital data only) and the 1 year after the date of the clinical dementia diagnosis. The date of diagnosis was either the date of the baseline clinic assessment, in those who screened negative on the MMSE and IQCODE, or the date of the follow-up clinical assessment, in those who screened positive. In the primary analysis, individuals whose clinical assessment status was ‘unknown’ or ‘mild cognitive impairment’ were categorised as ‘no dementia’ for the diagnostic accuracy assessment.
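A minimal sketch of these two coding rules, with hypothetical field names (the screening cut-offs are those described under ‘Reference standard’ above); this is an illustration, not the study's code:

```python
# Minimal sketch of the primary-analysis rules described above; field names are hypothetical.

def index_date(participant):
    """Date around which the 1-year pre/post administrative windows are centred."""
    if participant["screen_positive"]:  # MMSE <= 26 and/or IQCODE >= 3.6
        return participant["followup_clinical_assessment_date"]
    return participant["baseline_clinic_date"]

def primary_reference_standard(clinical_category):
    """Primary analysis: 'unknown' and 'mild cognitive impairment' count as 'no dementia'."""
    return "dementia" if clinical_category == "dementia" else "no dementia"
```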

Statistical analyses

Descriptive statistics such as frequency, proportion, mean and standard deviation of the demographic characteristics were calculated and classified by clinically diagnosed dementia status (i.e. reference standard). The two-sample t-test was performed to compare the mean IQCODE between groups.

Six measures of diagnostic accuracy and their 95% confidence intervals (CI) were calculated to determine the accuracy of hospital and death records over a 2 year time frame (1 year pre and 1 year post the clinical diagnosis): sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio, and negative likelihood ratio [31].
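The paper does not spell out the CI formulas, but the reported intervals are consistent with Wilson score intervals for the four proportions and log-method intervals for the likelihood ratios. The sketch below (Python for illustration; the analyses themselves were run in Stata) computes the six measures from a 2 × 2 table under those assumptions.

```python
# Hedged sketch: six diagnostic accuracy measures with 95% CIs from a 2x2 table.
# CI methods (Wilson for proportions, log method for likelihood ratios) are
# assumptions that closely reproduce the intervals reported in this paper.
import math
from statsmodels.stats.proportion import proportion_confint

def diagnostic_accuracy(tp, fp, fn, tn, alpha=0.05):
    """Return (point estimate, lower CI, upper CI) for each of the six measures."""
    z = 1.96  # approximate normal quantile for 95% CIs

    def prop_with_ci(num, den):
        lo, hi = proportion_confint(num, den, alpha=alpha, method="wilson")
        return num / den, lo, hi

    sens = prop_with_ci(tp, tp + fn)   # sensitivity
    spec = prop_with_ci(tn, tn + fp)   # specificity
    ppv = prop_with_ci(tp, tp + fp)    # positive predictive value
    npv = prop_with_ci(tn, tn + fn)    # negative predictive value

    # Likelihood ratios with log-method confidence intervals
    lr_pos = sens[0] / (1 - spec[0])
    lr_neg = (1 - sens[0]) / spec[0]
    se_log_pos = math.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))
    se_log_neg = math.sqrt(1/fn - 1/(tp + fn) + 1/tn - 1/(fp + tn))

    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": ppv,
        "NPV": npv,
        "LR+": (lr_pos, lr_pos * math.exp(-z * se_log_pos), lr_pos * math.exp(z * se_log_pos)),
        "LR-": (lr_neg, lr_neg * math.exp(-z * se_log_neg), lr_neg * math.exp(z * se_log_neg)),
    }

# e.g. diagnostic_accuracy(18, 11, 70, 1301) approximately reproduces the
# primary-analysis values reported in the Results section.
```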

Several sub-group and sensitivity analyses were performed in addition to the primary analysis. First, a sensitivity analysis for accuracy of diagnostic categories was undertaken by reclassifying the ‘unknown’ category of dementia diagnosis by (1) excluding all with an ‘unknown’ clinical diagnosis of dementia; or (2) grouping ‘unknown’ into the ‘dementia’ rather than ‘no dementia’ category. Second, different time frames for the administrative data records were used given the lack of an established and validated time frame in past research. The time frames were administrative data records covering (1) pre 6 months (hospital data only) and post 6 months; (2) only post 6 months; (3) only post 1 year; and (4) only post 2 years of the clinical dementia assessment. Both hospital and death data were used for the post-period. Third, we stratified the primary outcome by ethnicity using the following categories of the self-reported country of birth: (a) Australian-born (predominantly Anglo-Celtic background), (b) Italian or Greek migrants, and (c) all other migrant groups to test the hypothesis that accuracy of administrative data would vary between ethnic minority groups. Fourth, the accuracy of self-reported dementia in the CHAMP study was also compared to the reference standard, as well as combining self-reported dementia with pre and post 1 year linked administrative data.
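For illustration only, the alternative time frames could be expressed as parameter sets for the admin_dementia_flag() sketch given earlier; the labels and the zero-length pre-window used for the ‘post only’ frames are our assumptions, not the study's code.

```python
# Alternative administrative-data windows examined in the sensitivity analyses,
# expressed as arguments for admin_dementia_flag() from the earlier sketch.
from dateutil.relativedelta import relativedelta

TIME_FRAMES = {
    "pre and post 1 year (primary)": dict(pre=relativedelta(years=1), post=relativedelta(years=1)),
    "pre and post 6 months":         dict(pre=relativedelta(months=6), post=relativedelta(months=6)),
    "post 6 months only":            dict(pre=relativedelta(), post=relativedelta(months=6)),
    "post 1 year only":              dict(pre=relativedelta(), post=relativedelta(years=1)),
    "post 2 years only":             dict(pre=relativedelta(), post=relativedelta(years=2)),
}
```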

All statistical analyses were performed in Stata (version 17, College Station, Texas, USA). This manuscript is reported as per the Standards for Reporting of Diagnostic Accuracy Studies (STARD) 2015 Guidelines (Table S2–3) [32].

Results

There were 1705 men recruited into the CHAMP study at baseline, of whom 1639 (96%) consented to data linkage to administrative health records (Fig. 1). Linked data were available from 1 July 2004 to 4 May 2009 (i.e. 2 years after the date of the last enrolment in the CHAMP study). Therefore, participants who received their clinical dementia diagnosis or assessment before 1 July 2005 did not have a full year of linked administrative data available and were excluded from the primary analysis (i.e. pre and post 1 year linked administrative data). Of the 1400 men included in the primary analysis, 401 (29%) met the screening criteria and underwent additional assessment, 88 (6%) had a clinical diagnosis of dementia at baseline and 1312 (94%) were categorised as having ‘no dementia’ (Table 1). Men without dementia were younger than men with dementia. Men without dementia were more often Australian-born than men with dementia (50% vs 40%); the proportions of Italian or Greek migrants were similar, but other migrants were more common in the dementia group than in the no-dementia group (32% vs 26%). Men with dementia were more likely to live alone and less likely to have learnt English before the age of 12 years. Marital status was similar in both groups. Of the 88 men with dementia, 19 (22%) self-reported that they had dementia, 67 (76%) self-reported no dementia and two (2%) were unsure. However, of the 1312 men without dementia, 18 (1%) self-reported that they had dementia.

Fig. 1 Flow chart of CHAMP recruitment and study subjects included in the present study

Table 1 Demographic characteristics among 1400 men from CHAMP, by clinically diagnosed dementia status

There were 29 men with a dementia diagnosis using pre and post 1 year linked administrative data; of those, 18 men had a clinical diagnosis of dementia, resulting in a PPV of 62% (95% CI: 44–77%) (Fig. 2, Table S4). Of the 88 men who had a clinical diagnosis of dementia, only 18 had a dementia diagnosis from linked administrative data, giving a sensitivity of 20% (95% CI: 13–30%). The specificity of using pre and post 1 year linked administrative data was 99% (95% CI: 99–100%) and the NPV was 95% (95% CI: 94–95%). The mean IQCODE for the 18 true-positive cases was 4.3 (SD = 0.7), slightly higher than that for the 11 false-positive cases (3.8 [SD = 0.5], p = 0.024) and the 70 false-negative cases (3.7 [SD = 0.6], p < 0.001), but there was no difference in MMSE scores between groups. There were 474 (29%) men who did not have a hospitalisation or death record during the study period and who were therefore assumed not to have dementia. However, there was no significant difference in the mean number of hospital admissions between men who had a dementia record and men who did not (4.9 [SD 10.4] vs 2.2 [SD 11.3], p = 0.198).
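As a cross-check, the reported estimates follow directly from this 2 × 2 table (18 true positives, 11 false positives, 70 false negatives, 1301 true negatives); the working below is shown only to make the arithmetic explicit.

```latex
\begin{align*}
\text{Sensitivity} &= \frac{18}{18+70} = \frac{18}{88} \approx 20\%, &
\text{Specificity} &= \frac{1301}{1301+11} = \frac{1301}{1312} \approx 99\%,\\
\text{PPV} &= \frac{18}{18+11} = \frac{18}{29} \approx 62\%, &
\text{NPV} &= \frac{1301}{1301+70} = \frac{1301}{1371} \approx 95\%,\\
LR^{+} &= \frac{18/88}{11/1312} \approx 24.4, &
LR^{-} &= \frac{70/88}{1301/1312} \approx 0.80.
\end{align*}
```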

Fig. 2 Sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio for administrative data for dementia diagnosis using different time frames compared to a clinical dementia diagnosis

The primary analysis identified 11 false-positive cases, i.e. men classified as having dementia in the linked administrative data but not in the clinical diagnosis used as the reference standard: seven were men of unknown dementia status, four were cognitively normal and none had mild cognitive impairment (Table S5). All seven men with unknown dementia status screened positive for dementia based on the MMSE (mean = 20.7) and IQCODE (mean = 4.0), but five were unable to complete a follow-up clinical assessment. If all 130 men with unknown dementia status were excluded, the PPV improved to 82% (95% CI: 61–93%) (Table S6, Fig. S1). Furthermore, if these 130 men with unknown dementia status were assumed to have dementia, the PPV improved slightly further to 86% (95% CI: 69–95%) but the sensitivity dropped to 11% (95% CI: 8–16%) (Table S7, Fig. S2).

The sensitivity analyses using different time frames for the administrative data records were conducted among all 1639 men and the demographic characteristics of this larger sample were similar to the 1400 men in the primary analysis (Table S8). Figure 2 shows that sensitivity increased with a longer time frame for administrative data but remained low (ranging from 12% using post 6 months administrative data to 23% using post 2 years administrative data). The specificity remained high regardless of the time frame used (> 98%). The positive likelihood ratios for all time frames ranged from 12.2 to 31.2 and the negative likelihood ratios for all time frames ranged from 0.79 to 0.88.

Italian or Greek migrants had a lower sensitivity (12%; 95% CI: 3–31%) compared to the Australian-born (20%; 95% CI: 8–37%) and other migrants (29%; 95% CI: 13–50%) but the differences were not statistically significant with wide and overlapping confidence intervals (Table 2).

Table 2 Sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio for pre and post 1 year of administrative data for dementia diagnosis, stratified by country of birth

Adding self-reported dementia did not improve the diagnostic accuracy of the linked data diagnosis. There were small changes to the positive and negative likelihood ratios but not enough to change the clinical meaningfulness of the tests. For example, the negative likelihood ratio for no self-reported and no linked data diagnosis was not small enough to be useful in ruling out the presence of dementia (Tables S9–10).

Discussion

Our findings suggest that routinely collected administrative hospital and death data has limited accuracy for dementia diagnosis. The poor sensitivity of 20% indicates that hospital and death data identified only one-fifth of the men with dementia, meaning that the majority of men with dementia were not captured in either source. Moreover, increasing the time frame of the administrative data did not greatly improve sensitivity. Hospital and death data did demonstrate relatively high specificity (> 98%) and NPV (> 95%). In addition, as the positive likelihood ratios for all time frames were greater than 10, [33] a hospital and death data diagnosis is useful for ruling in a diagnosis of dementia. However, as the negative likelihood ratios for all time frames were greater than 0.1, [34] the absence of an administrative data diagnosis cannot rule out dementia.
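As a worked illustration of these thresholds using the study's own figures (a pre-test probability of 88/1400 ≈ 6.3% and the likelihood ratios from the primary analysis), converting probabilities to odds and back gives:

```latex
\begin{align*}
\text{pre-test odds} &= \frac{0.063}{1-0.063} \approx 0.067,\\
\text{positive record:}\quad 0.067 \times 24.4 &\approx 1.64
\;\Rightarrow\; \text{post-test probability} \approx \tfrac{1.64}{1+1.64} \approx 62\%,\\
\text{negative record:}\quad 0.067 \times 0.80 &\approx 0.054
\;\Rightarrow\; \text{post-test probability} \approx \tfrac{0.054}{1+0.054} \approx 5\%.
\end{align*}
```

A dementia code in hospital or death data therefore raises the probability of dementia from about 6% to about 62% (consistent with the PPV), whereas the absence of a code lowers it only from about 6% to about 5%.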

Wilkinson et al. published a systematic review of 27 studies summarising the sensitivity of routinely collected health data for identifying all-cause dementia [3]. However, most of the included studies with high sensitivity used medical chart review, rather than clinical diagnosis, as the reference standard. Using medical chart review as the reference standard to assess the accuracy of linked data is problematic because the reference standard is not independent of the index test: the dementia diagnosis codes are themselves derived from medical records. Moreover, medical chart review would not identify individuals with milder dementia who have not yet been screened or diagnosed and who are therefore also missed by administrative data.

In Wilkinson et al.’s review, only two studies used clinical diagnosis with a consensus panel of clinicians as the reference standard, similar to our current study [35, 36]. The estimated sensitivity in our study (20%) is similar to the estimate reported in the CAIDE (Cardiovascular Risk Factors, Aging and Dementia) study in Finland using hospital discharge register records (14%) [35], but lower than the estimate reported in the ADAMS (Aging, Demographics, and Memory Study) in the US using Medicare claims (85%), which include inpatient hospital claims, home health care, and use of a skilled nursing facility. The inclusion of data on services outside of hospitals is likely behind the greater sensitivity observed in the ADAMS study. In Australia, in addition to hospital and death data, there are several other routine datasets that can be used to identify individuals with dementia, including pharmaceutical claims, aged care assessments, and the aged care funding instrument (ACFI) [4]. A previous Australian study used these five datasets to identify people with dementia and found that almost half of the people with dementia were identified in only one dataset [4]. The majority were identified from hospital records (32%), followed by ACFI (24%), pharmaceutical claims (23%), aged care assessments (17%) and death records (4%), [4] suggesting that using multiple data sources would improve accuracy [13]. This is also supported by a Canadian study showing high sensitivity (79%) and PPV (80%) for an administrative data algorithm to identify dementia compared with family physicians’ electronic medical records [37]. Unfortunately, we did not have access to these additional data sources in our study. Even so, it is unlikely that such an algorithm would capture as many people with dementia as the screening and clinical diagnosis methods used in a community-based cohort study, in which it is possible to identify people who may be unaware of their dementia diagnosis and who, as a result, are not seeking health care.

The major strength of this study is that all men were screened for dementia with a final diagnosis based on clinical assessments and a consensus panel. The ethnic diversity of our sample is another strength as it is estimated that one in three Australians aged ≥65 years are migrants, with many born in a non-English speaking country [38]. We did not find strong evidence that ethnic background impacted the accuracy of dementia diagnosis using administrative data in our study. However, the small number of dementia cases in our community-based study meant that our comparison between ethnic groups was underpowered. Our study also has several limitations. First, we were only able to link hospital and death records, contributing to low sensitivity [39]. Second, we only included men in this study and the accuracy of using administrative data or self-reported dementia may be different between men and women [40]. Third, selection bias might have occurred in the CHAMP study due to the exclusion of those living in a residential aged care facility who would be more likely to have dementia [41]. Fourth, given the low prevalence of dementia (5.6%) in the community-based sample, we did not have sufficient power to examine whether the accuracy of administrative data varies according to demographic characteristics (e.g. socioeconomic status). Further studies using a larger sample size or an older cohort where dementia is more prevalent may be required. Fifth, MMSE scores can be influenced by culture, ethnicity and language, [42] which may cause bias in this ethnically diverse cohort. However, the final clinical diagnosis did use the results of additional assessments including the Rowland Universal Dementia Assessment Scale (RUDAS) in culturally and linguistically diverse men. Sixth, about 50% of the participants in our study were migrants. Our findings may be biased if these migrants are less likely to access healthcare services. Seventh, we might have underestimated the diagnostic accuracy of linked administrative data for dementia in this study as only health conditions affecting the episode of care are coded in APDC data and hence some men with dementia might not be identified.

To conclude, the sensitivity and PPV of linked administrative data were low when clinical diagnosis was used as the reference standard in this ethnically diverse community-based cohort of men. Administrative data are increasingly used in research, and estimates of the prevalence of dementia in Australia are based on administrative data given the limited cohort study data available. The services needed to provide support and care for men living with dementia are therefore likely to be underestimated if estimates are based on linked hospital and death data.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to ethical reasons but are available from the corresponding author on reasonable request.

References

  1. Dementia prevalence data [https://www.dementia.org.au/information/statistics/prevalence-data].

  2. Brown L, Hansnata E, La HA. Economic cost of dementia in Australia 2016-2056: report prepared for Alzheimer’s Australia. Canberra: The National Centre for Social and Economic Modelling (NATSEM), University of Canberra; 2017.

  3. Wilkinson T, Ly A, Schnier C, Rannikmae K, Bush K, Brayne C, et al.; UK Biobank Neurodegenerative Outcomes Group, Dementias Platform UK. Identifying dementia cases with routinely collected health data: a systematic review. Alzheimers Dement. 2018;14(8):1038–51.

  4. Welberry HJ, Brodaty H, Hsu B, Barbieri S, Jorm LR. Measuring dementia incidence within a cohort of 267,153 older Australians using routinely collected linked administrative data. Sci Rep. 2020;10(1):8781.

  5. Waller M, Mishra GD, Dobson AJ. Estimating the prevalence of dementia using multiple linked administrative health records and capture-recapture methodology. Emerg Themes Epidemiol. 2017;14:3.

  6. Cations M, Lang C, Ward SA, Crotty M, Inacio MC. Dementia case ascertainment using aged care assessment data. Aust N Z J Public Health. 2020;44(6):517–8.

  7. Australia’s health 2020: data insights [https://www.aihw.gov.au/getmedia/be95235d-fd4d-4824-9ade-34b7491dd66f/aihw-aus-231.pdf.aspx?inline=true].

  8. Lang L, Clifford A, Wei L, Zhang D, Leung D, Augustine G, et al. Prevalence and determinants of undetected dementia in the community: a systematic literature review and a meta-analysis. BMJ Open. 2017;7(2):e011146.

  9. Bohensky MA, Jolley D, Sundararajan V, Evans S, Pilcher DV, Scott I, et al. Data linkage: a powerful research tool with potential problems. BMC Health Serv Res. 2010;10:346.

  10. Yaffe K, Falvey C, Harris TB, Newman A, Satterfield S, Koster A, et al. Effect of socioeconomic disparities on incidence of dementia among biracial older adults: prospective study. BMJ. 2013;347:f7051.

  11. Health across socioeconomic groups [https://www.aihw.gov.au/reports/australias-health/health-across-socioeconomic-groups].

  12. Bennett DA, Schneider JA, Aggarwal NT, Arvanitakis Z, Shah RC, Kelly JF, et al. Decision rules guiding the clinical diagnosis of Alzheimer's disease in two community-based cohort studies compared to standard practice in a clinic-based cohort study. Neuroepidemiology. 2006;27(3):169–76.

  13. Sibbett RA, Russ TC, Deary IJ, Starr JM. Dementia ascertainment using existing data in UK longitudinal and cohort studies: a systematic review of methodology. BMC Psychiatry. 2017;17(1):239.

  14. Religa D, Fereshtehnejad SM, Cermakova P, Edlund AK, Garcia-Ptacek S, Granqvist N, et al. SveDem, the Swedish dementia registry - a tool for improving the quality of diagnostics, treatment and care of dementia patients in clinical practice. PLoS One. 2015;10(2):e0116538.

  15. Clarke P, Leal J, Kelman C, Smith M, Colagiuri S. Estimating the cost of complications of diabetes in Australia using administrative health-care data. Value Health. 2008;11(2):199–206.

  16. Gianattasio KZ, Wu Q, Glymour MM, Power MC. Comparison of methods for algorithmic classification of dementia status in the Health and retirement study. Epidemiology. 2019;30(2):291–302.

  17. Cumming RG, Handelsman D, Seibel MJ, Creasey H, Sambrook P, Waite L, et al. Cohort profile: the Concord Health and ageing in men project (CHAMP). Int J Epidemiol. 2009;38(2):374–8.

  18. Sayers A, Ben-Shlomo Y, Blom AW, Steele F. Probabilistic record linkage. Int J Epidemiol. 2016;45(3):954–64.

  19. Shrestha S, Dave AJ, Losina E, Katz JN. Diagnostic accuracy of administrative data algorithms in the diagnosis of osteoarthritis: a systematic review. BMC Med Inform Decis Mak. 2016;16:82.

  20. Henderson T, Shepheard J, Sundararajan V. Quality of diagnosis and procedure coding in ICD-10 administrative data. Med Care. 2006;44(11):1011–9.

  21. Bu F, Abell J, Zaninotto P, Fancourt D. A longitudinal analysis of loneliness, social isolation and falls amongst older people in England. Sci Rep. 2020;10(1):20064.

  22. Wilkinson T, Schnier C, Bush K, Rannikmae K, Henshall DE, Lerpiniere C, et al. Identifying dementia outcomes in UK biobank: a validation study of primary care, hospital admissions and mortality data. Eur J Epidemiol. 2019;34(6):557–65.

  23. Ylilauri MP, Voutilainen S, Lonnroos E, Mursu J, Virtanen HE, Koskinen TT, et al. Association of dietary cholesterol and egg intakes with the risk of incident dementia or Alzheimer disease: the Kuopio Ischaemic heart disease risk factor study. Am J Clin Nutr. 2017;105(2):476–84.

  24. Folstein MF, Folstein SE, McHugh PR. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–98.

  25. Jorm AF. A short form of the informant questionnaire on cognitive decline in the elderly (IQCODE): development and cross-validation. Psychol Med. 1994;24(1):145–53.

  26. Waite L, Grayson D, Jorm AF, Creasey H, Cullen J, Bennett H, et al. Informant-based staging of dementia using the clinical dementia rating. Alzheimer Dis Assoc Disord. 1999;13(1):34–7.

  27. Rowland JT, Basic D, Storey JE, Conforti DA. The Rowland universal dementia assessment scale (RUDAS) and the Folstein MMSE in a multicultural cohort of elderly persons. Int Psychogeriatr. 2006;18(1):111–20.

  28. Naqvi RM, Haider S, Tomlinson G, Alibhai S. Cognitive assessments in multicultural populations using the Rowland universal dementia assessment scale: a systematic review and meta-analysis. CMAJ. 2015;187(5):E169–75.

  29. Goudsmit M, van Campen J, Schilt T, Hinnen C, Franzen S, Schmand B. One size does not fit all: comparative diagnostic accuracy of the Rowland universal dementia assessment scale and the Mini mental state examination in a memory clinic population with very low education. Dement Geriatr Cogn Dis Extra. 2018;8(2):290–305.

  30. American Psychiatric Association. Diagnostic and statistical manual of mental disorders. In: Text Revision (DSM-IV-TR). 4th ed. Virginia: American Psychiatric Association; 2000.

  31. Shreffler J, Huecker MR. Diagnostic Testing Accuracy: Sensitivity, Specificity, Predictive Values and Likelihood Ratios. Treasure Island (FL): StatPearls; 2021.

  32. Cohen JF, Korevaar DA, Altman DG, Bruns DE, Gatsonis CA, Hooft L, et al. STARD 2015 guidelines for reporting diagnostic accuracy studies: explanation and elaboration. BMJ Open. 2016;6(11):e012799.

  33. Akobeng AK. Understanding diagnostic tests 2: likelihood ratios, pre- and post-test probabilities and their use in clinical practice. Acta Paediatr. 2007;96(4):487–91.

  34. Deeks JJ, Altman DG. Diagnostic tests 4: likelihood ratios. BMJ. 2004;329(7458):168–9.

  35. Solomon A, Ngandu T, Soininen H, Hallikainen MM, Kivipelto M, Laatikainen T. Validity of dementia and Alzheimer's disease diagnoses in Finnish national registers. Alzheimers Dement. 2014;10(3):303–9.

  36. Taylor DH Jr, Ostbye T, Langa KM, Weir D, Plassman BL. The accuracy of Medicare claims as an epidemiological tool: the case of dementia revisited. J Alzheimers Dis. 2009;17(4):807–15.

  37. Jaakkimainen RL, Bronskill SE, Tierney MC, Herrmann N, Green D, Young J, et al. Identification of physician-diagnosed Alzheimer's disease and related dementias in population-based administrative data: a validation study using family Physicians' electronic medical records. J Alzheimers Dis. 2016;54(1):337–49.

  38. Older Australia at a glance (Cat. no. AGE 87) [https://www.aihw.gov.au/reports/older-people/older-australia-at-a-glance/contents/demographics-of-older-australians].

  39. Cummings E, Maher R, Showell CM, Croft T, Tolman J, Vickers J, et al. Hospital coding of dementia: is it accurate? Health Inf Manag. 2011;40(3):5–11.

  40. Jin YP, Gatz M, Johansson B, Pedersen NL. Sensitivity and specificity of dementia coding in two Swedish disease registries. Neurology. 2004;63(4):739–41.

  41. Dyer SM, Gnanamanickam ES, Liu E, Whitehead C, Crotty M. Diagnosis of dementia in residential aged care settings in Australia: an opportunity for improvements in quality of care? Australas J Ageing. 2018;37(4):E155–8.

  42. Escobar JI, Burnam A, Karno M, Forsythe A, Landsverk J, Golding JM. Use of the Mini-mental state examination (MMSE) in a community population of mixed ethnicity. Cultural and linguistic artifacts. J Nerv Ment Dis. 1986;174(10):607–14.

Acknowledgments

None.

Funding

The CHAMP study is funded by the National Health and Medical Research Council of Australia (project grant no. 301916) and the Ageing and Alzheimer’s Institute and the Sydney Medical School Foundation. This work was supported by the Australian Research Council Centre of Excellence in Population Aging Research (Project number CE170100005). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

EPFC is supported by an Australian National Health and Medical Research Council (NHMRC) Emerging Leadership Investigator Grant (GNT1172873).

Author information

Contributions

FS designed and conceived the study with input from BH and LW. EPFC performed the data analyses and wrote the first draft of the manuscript. BH performed the data linkage. All authors were involved in data interpretation, commented on manuscript drafts, and contributed to the final version of the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Eric P. F. Chow.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed and approved by the Sydney Local Health District Human Research Ethics Committee (HERC/14/CRGH/17). Participants were provided study related information using the approved patient information sheet. All methods were carried out in accordance with the relevant guidelines and regulations. Written informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Chow, E.P.F., Hsu, B., Waite, L.M. et al. Diagnostic accuracy of linked administrative data for dementia diagnosis in community-dwelling older men in Australia. BMC Geriatr 22, 858 (2022). https://doi.org/10.1186/s12877-022-03579-2
