
Clinically useful prediction of hospital admissions in an older population

Abstract

Background

Healthcare for older adults is insufficient in many countries, not designed to meet their needs, and often described as disorganized and reactive. Predicting which older persons are at risk of admission to hospital may be one important way for the future healthcare system to act proactively when meeting increasing needs for care. We therefore wanted to develop and test a clinically useful model for predicting hospital admissions of older persons based on routine healthcare data.

Methods

We used routine healthcare data on 40,728 persons aged 75–109 years to predict unplanned in-ward hospital care in a prospective cohort. Multivariable logistic regression was used to identify significant factors predictive of unplanned hospital admission. Model fitting was accomplished using forward selection. The accuracy of the prediction model was expressed as the area under the receiver operating characteristic (ROC) curve, AUC.

Results

The prediction model, consisting of 38 variables, exhibited good discriminative accuracy for unplanned hospital admissions over the following 12 months (AUC 0.69 [95% confidence interval, CI 0.68–0.70]) and was validated on external datasets. Clinically relevant proportions of predicted cases of 40 and 45% resulted in sensitivities of 62 and 66%, respectively. The corresponding positive predictive values (PPV) were 31 and 29%, respectively.

Conclusion

A prediction model based on routine administrative healthcare data from older persons can be used to find patients at risk of admission to hospital. Identifying the risk population can enable proactive intervention for older patients with as-yet unknown needs for healthcare.

Background

With an aging population worldwide, older age is generally associated with increased health-related needs and increased healthcare costs, although not by as much as previously expected [1]. Nevertheless, the association with both healthcare utilization and costs varies [2, 3], and in some high-income countries healthcare costs per person actually fall significantly after the age of 75 [4, 5]. Differences in provider systems, in the management of frail older people and in cultural norms, particularly near the time of death, may explain why the association between age and healthcare costs is also strongly influenced by the healthcare system itself [1].

Even though the future challenges that an aging population poses for the healthcare system may have been exaggerated, the present healthcare situation for the elderly population in many countries is insufficient and not designed according to their healthcare needs [6]. The healthcare needs of the aging population relate to morbidity, multi-morbidity and frailty [7]. At the same time, several reports indicate that a majority of the aged population is satisfied with their health (see [8]), manages life at home and considers themselves to be healthy [9, 10]. Only a minority of the aged population needs hospital care. In most cases, however, the healthcare system does not differentiate within the heterogeneous old-age population, but rather organizes both hospital and primary care using a passive and reactive approach (acting when symptoms or problems occur).

In order to detect elderly people with significant care needs (hospital care), there have been many attempts to define “frail” older people [11,12,13]. In this context, however, the scales used to predict which persons will need healthcare, some of whom are frail, exhibit some major shortcomings. Firstly, “frailty” is not an easily defined medical condition, and there is no consensus on its operational definition [13,14,15,16]. Secondly, and from a clinical perspective more importantly, evaluation using clinical instruments requires trained staff for each individual assessment and is not always easily applied in a broader clinical context where a primary geriatric perspective may not be present (primary care, acute ward disciplines). A final limitation of the use of “frailty” scales in a wider clinical context is the fact that most elderly people (75% of those aged 80+) seem to manage at home despite multi-morbidity and frailty. This was indicated in two separate studies of 85-year-olds (in England and Sweden), which drew similar pictures of health and aging [9, 10]. A majority (> 75%) of the studied 85-year-olds managed their lives at home, rated themselves as healthy (80% rated their health good to excellent) and seldom used hospital care. Only one-quarter to one-third of the aged population appeared to be high consumers of healthcare. These facts underline the difficulty of managing healthcare in an aged community: our ability to detect individuals with possible needs, and to direct care resources specifically towards those with the greatest need of care prior to hospitalization, is not optimal.

Statistical or digital prediction models have been suggested as an evidence-based method to identify or select older persons in greater need of healthcare [17]. Earlier studies indicated that administrative data are useful in the prediction of hospital care [18], also for older adults in a group health cooperative [19]. More recently, the use of electronic administrative data to identify older community-dwelling adults at high risk of hospitalization demonstrated good accuracy (AUC 0.678) [20]. In the present study we wanted to investigate a larger county population, not limited by health insurance systems or other selection factors, to see whether we could develop a digital prediction model for older adults at high risk of hospital care that can be used in routine healthcare. If this group of elderly people could be identified, proactive healthcare activities could be considered before hospital care takes place [21], and some persons in need of hospital care could be directed to an appropriate clinic instead of using the emergency care system.

Methods

This prediction model study is reported in accordance with the TRIPOD checklist [22].

Aim, design, setting and population

The aim was to develop and test a clinically useful model for predicting hospital admissions of older persons based on routine healthcare data. This is a prospective cohort study that included all residents aged 75–109 years in the county of Östergötland (n = 40,728) located in the south-east of Sweden. This age group constitutes 9.6% of the population, close to the national proportion of 9.2%. In the county of Östergötland, healthcare for the elderly is provided mainly by 43 healthcare centres in primary care and four hospitals, one of which is the University Hospital of Linköping.

Data source and study variables

The 12-month data were obtained between November 2015 and October 2016 from the computerized information system of the County Council of Östergötland, where statistics for all healthcare in the county are stored. For example, for the whole population there are records of the number of visits to primary or hospital care, the number of days in hospital, diagnostic codes for each visit, etc. We used unplanned in-ward hospital stays between November 2016 and October 2017 as the dependent variable. Several time periods were tested, and the predicted cases were included in an intervention study [21]. We included the number of physician visits, the number of non-physician visits (to nurses, occupational therapists or physiotherapists), the number of previous in-ward hospital stays, the number of emergency room (ER) visits, age, gender and International Classification of Diseases, 10th Revision (ICD10) codes grouped by two digits. For each diagnosis, two variables were constructed, one based on open-clinic visits and one based on hospital visits. To obtain good precision in the estimation of the coefficients and a model that is reliable over time, variables with fewer than 40 observations were excluded. All diagnosis variables were dichotomized into yes or no. People who died during the subsequent prediction period were included in the analysis.
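As an illustration of this preparation step, the following is a minimal R sketch of the dichotomization and the exclusion of rare diagnosis variables. The data frame model_data and its column names are hypothetical; the actual extraction from the county information system is not described in code in the paper.

# Hypothetical data frame: one row per person, with counts of contacts per
# two-digit ICD-10 group prefixed icd_open_ / icd_hospital_
diag_cols <- grep("^icd_", names(model_data), value = TRUE)

# Dichotomize every diagnosis variable into 0/1 (any contact vs none)
model_data[diag_cols] <- lapply(model_data[diag_cols],
                                function(x) as.integer(x > 0))

# Exclude diagnosis variables with fewer than 40 observations
counts     <- vapply(model_data[diag_cols], sum, numeric(1))
model_data <- model_data[, !(names(model_data) %in% diag_cols[counts < 40])]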

Model development

The data were randomly divided into two halves, a training data set and a validation data set. The training set was used to build the prediction model and the validation set was used to validate it. The prediction model algorithm was developed using multivariable logistic regression (LR) with forward selection (see statistics below). The aim was to identify participants aged 75 or older who are likely to be hospitalized within the next 12 months.
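A sketch of the random split into equally sized training and validation halves, continuing the hypothetical data frame from the previous sketch (the seed is arbitrary):

set.seed(1)                                   # arbitrary seed, for reproducibility only
n_total   <- nrow(model_data)
train_idx <- sample.int(n_total, size = floor(n_total / 2))
train     <- model_data[train_idx, ]          # used to build the model
valid     <- model_data[-train_idx, ]         # used to validate the model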

Statistical analysis and external validation

The first step was to calculate the univariable association of each variable with 12-month unplanned hospital admission. Because the large number of observations could produce statistical significance for rather weak associations, only variables with p-values below 0.001 were carried forward into the multivariable analysis.
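The screening step could be sketched as follows in R, assuming a binary outcome column unplanned_admission (0/1) and the hypothetical data frame from the sketches above (identifier columns, if any, would also be excluded from the candidates):

outcome    <- "unplanned_admission"
candidates <- setdiff(names(train), outcome)

# p-value of each variable's univariable association with the outcome
p_values <- vapply(candidates, function(v) {
  fit <- glm(reformulate(v, response = outcome), data = train, family = binomial)
  coef(summary(fit))[2, "Pr(>|z|)"]
}, numeric(1))

screened <- candidates[p_values < 0.001]      # kept for the multivariable step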

Multivariable logistic regression was then used to identify significant factors predictive of unplanned hospital admission over a 12-month period. The model-building process consisted of three steps: selecting the variables, building the model, and validating the model. The best model was assessed by the change in the Akaike information criterion. A penalty factor of five was used to avoid overfitting and to reduce the number of variables in the final model. Collinearity was assessed by calculating the variance inflation factor for each variable in the final model, and variables with a value above five were excluded. After the final model was built, some further tests were performed in an attempt to improve it. First, we tested all two-way interactions. Second, we tested log-transformation of all numerical variables. Finally, we tested for non-linearity in the numerical variables using restricted cubic splines. If no improvement in AUC was achieved, the simplest model was chosen, because we wanted a robust model that was easy to implement. Risk scores were calculated for all individuals.
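The core of this step can be sketched with MASS::stepAIC, which takes the AIC penalty as the argument k; the use of car::vif for the collinearity check is an assumption, since the paper does not name the package used for it.

library(MASS)

null_fit <- glm(reformulate("1", response = outcome), data = train, family = binomial)
scope_f  <- reformulate(screened)             # upper scope: all screened variables

final_fit <- stepAIC(null_fit,
                     scope     = list(lower = ~ 1, upper = scope_f),
                     direction = "forward",
                     k         = 5,           # penalty factor of five instead of the default 2
                     trace     = FALSE)

# Collinearity check (assumption: car package); variables with VIF > 5
# would be dropped and the model refitted
vifs <- car::vif(final_fit)
vifs[vifs > 5]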

Model performance measures: overall discrimination was assessed using the c-statistic, a measure of goodness of fit for binary outcomes in a logistic regression model. The area under the receiver operating characteristic (ROC) curve (AUC) was used to quantify discrimination of the binary outcome (hospital admission or not). The ROC curve plots sensitivity against specificity across all possible threshold cut-off points. The AUC reflects the accuracy of the predictive models and can be compared among different models. An AUC of 0.5 means the model has no discrimination (the proportions of true positive and false positive cases are equal), whereas an AUC of 1.0 means the model has perfect discrimination [23]. Five different sensitivity analyses were performed to assess how the prediction model changed in different settings. The first model included both unplanned and planned hospital admissions, the second excluded people who died within the 12-month follow-up period, the third and fourth used shorter follow-up periods of 3 and 6 months, and the fifth tested the least absolute shrinkage and selection operator (lasso) as an alternative selection method.
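Discrimination on the validation half, and the lasso alternative, could be computed with the pROC and glmnet packages listed under the software used, continuing the hypothetical objects from the sketches above and assuming all screened variables are numeric (counts and 0/1 indicators):

library(pROC)
library(glmnet)

valid$risk <- predict(final_fit, newdata = valid, type = "response")
roc_obj    <- roc(valid[[outcome]], valid$risk)
auc(roc_obj)                                  # c-statistic / AUC
ci.auc(roc_obj)                               # 95% confidence interval for the AUC

# Lasso (alpha = 1) as an alternative selection method, with the penalty
# chosen by cross-validation
x_train   <- as.matrix(train[, screened])
lasso_fit <- cv.glmnet(x_train, train[[outcome]], family = "binomial", alpha = 1)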

External validation was also performed on two additional data sets: one using the same time period as above but including ages 65–74 (n = 51,104), and one using the 75+ age group for the year 2012, predicting unplanned hospital admission over the following 12 months (n = 38,121).

All statistics were performed using R version 3.5.2 (R Core Team, Vienna, Austria). The Modern Applied Statistics with S (MASS) package was used for fitting the logistic model and the pROC package was used for estimating the AUC. The Lasso and Elastic-Net Regularized Generalized Linear Models (glmnet) package was used for fitting the lasso model. The Regression Modeling Strategies (rms) package was used for analysing with restricted cubic splines.

Ethical aspects

The study has been subject to ethical evaluation and was approved by the regional ethical review board in Linköping (Dnr 2016/347–31).

Results

In total, 40,728 individuals aged 75 years or older (57.7% women) were registered in the database. Their demographic characteristics and their use of unplanned hospital care within the subsequent 12-month period are given in Table 1. Even though the absolute number of cases admitted to hospital (unplanned) decreased across the age groups from 75 to 90+, the relative proportion admitted increased (from 15 to 28%). Thus, a person aged 90+ is more likely to be admitted to hospital than a person aged 75–79.

Table 1 Characteristics of the population aged 75–109 years in relation to unplanned hospital admissions

In total, 650 variables were available for analysis, of which 233 showed a statistically significant (p < 0.001) association with 12-month unplanned hospital admission in the training data set. Table 2 presents the 20 most significant variables from the univariable analyses. The results from the final multivariable predictive model are presented in Table 3. The AUC for hospital admission over the subsequent 12 months was 0.69 (95% CI: 0.68–0.70) in the validation data set (Fig. 1). The best prediction variables were the number of emergency-room visits, age, the number of non-physician visits and the number of physician visits, which alone resulted in an AUC of 0.67 (95% CI: 0.66–0.68). No collinearity problem existed, as the highest variance inflation factor was 2.1, for the number of emergency room visits. We found statistically significant interactions between the number of emergency room visits and the number of physician visits, between the number of emergency room visits and previous inpatient care, and between the number of emergency room visits and the number of non-physician visits. However, the effects were very small and did not improve the AUC of the final model. Neither could log-transformation of the numerical variables improve the AUC. We found evidence of non-linearity for age and the number of emergency room visits, but the non-linearity components were quite small and did not improve the AUC. Because the AUC was not improved, we selected the final model without further alterations.

Table 2 The twenty most significant variables predicting the risk for unplanned admission to hospital
Table 3 The final predictive model from the multivariable logistic regression together with odds ratios (OR) and 95% confidence intervals (CI)
Fig. 1

The ROC curve for predicting unplanned hospitalization derived from logistic regression using the validation data set (n = 20,364). Area under ROC curve (AUC) = 0.69, (95% CI 0.68–0.70)

Outcome using different proportions of predicted cases and different time periods

The outcome of the case-finding model varies depending on the risk-score cut-off used, with a low cut-off value including a large sample and a high cut-off resulting in a more targeted sample. The choice of risk-score level is important in clinical practice since it affects the proportion of predicted cases (Table 4). It is apparent that an increase in the cut-off value rapidly decreases the number of predicted cases and results in a corresponding loss of sensitivity. An important perspective from a clinical point of view is to decide on a manageable proportion of the predicted population that still enables a clinically meaningful sensitivity. As shown in Table 4, predicted proportions of 40 and 45% result in sensitivities of 62 and 66%, respectively. Using a 40% predicted population, we then investigated how different outcome periods would affect the quality of the predictions.

Table 4 Falling proportions of predicted cases and corresponding cut-off values on a validation data set (n = 20,364)
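To make this procedure concrete, the following is a minimal sketch of how a cut-off for a chosen predicted proportion, and the resulting sensitivity and positive predictive value, can be derived from the validation risk scores (continuing the hypothetical objects from the Methods sketches):

target_prop <- 0.40                                   # flag the top 40% of risk scores
cut_off     <- quantile(valid$risk, probs = 1 - target_prop)

flagged <- valid$risk >= cut_off
truth   <- valid[[outcome]] == 1

sensitivity <- sum(flagged & truth) / sum(truth)      # share of true cases that are flagged
ppv         <- sum(flagged & truth) / sum(flagged)    # share of flagged persons who become cases
specificity <- sum(!flagged & !truth) / sum(!truth)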

Sensitivity analysis

The main prediction model was based on unplanned hospital admissions (n = 8167), but a model including both planned and unplanned hospital admissions (n = 9354) resulted in an AUC of 0.68 (95% CI: 0.67–0.69). The variables in the two models were almost identical, and 85% of the variables in the planned/unplanned model were included in the unplanned model. A model based on unplanned hospital admission that excluded the 2166 people who died within the 12-month follow-up period resulted in an AUC of 0.67 (95% CI: 0.66–0.68); the AUC was lower, but the model was similar to the main prediction model and 80% of its variables were present in the main prediction model. Two different time intervals were also tested for unplanned hospital admission, where 3-month (n = 2503) and 6-month (n = 4664) follow-up models resulted in AUCs of 0.70 (95% CI: 0.68–0.71) and 0.69 (95% CI: 0.68–0.70), respectively. Using the lasso method did not improve the AUC (0.69, 95% CI: 0.68–0.70) compared with the stepwise procedure.

External validation

The main prediction model was also tested on two external samples for unplanned hospital admission over the following 12 months. Using the same data-collection period as above (2015/2016) but the age group 65–74 (n = 51,104), the AUC was 0.68 (95% CI: 0.67–0.69). Using the age group 75 years and older but another time point (2012) (n = 38,121), the AUC was also 0.68 (95% CI: 0.67–0.69).

Discussion

We used administrative routine healthcare data to develop a prediction model for unplanned admissions of older persons to hospital. The number of emergency-room visits, age, the number of non-physician visits and the number of physician visits were the most important variables in the model. The addition of the other 33 variables only slightly increased the AUC, and the different sensitivity analyses showed similar AUCs. The absence of a larger impact of individual medical diagnoses on the accuracy of the model can be explained by the fact that use of the healthcare system is the ultimate consequence of all diagnoses.

Strengths and limitations

The main strength of this study, in comparison with earlier smaller and more selected studies, is the large population including all inhabitants 75 years or older in a county, without selection factors such as insurance systems or specific care providers [19, 20]. The validity of a prediction tool is crucial for its usefulness in a broader clinical context [22], e.g. in other countries with similar structures for administrative healthcare data. It may be a weakness of the study that we were unable to include data from other counties or countries, but the external validity of our model was corroborated in two external samples, one using a different time period and one using a younger age group. Another limitation of the model is the lack of socio-economic and socio-demographic data, which are not available in the administrative healthcare data. However, the fact that the most important variables of the model, as well as its accuracy, correspond strikingly to a study in an American context supports the validity of the model [19]. There are other risk-adjustment measures for hospitalization, but their AUC values are in the same range as reported in our study [18]. Since the accuracy of our model is also in the same range as (or better than) studies in other countries using similar, but not identical, settings, we modestly assume our data to be generalizable [24].

Use of the model in a clinical context

High accuracy (expressed as the c-statistic) is to be expected for diagnostic tests such as medical imaging or polygraph lie detection, but in more complex settings, like some types of weather forecasting, c-statistics may in fact turn out to be 0.6–0.7 [23]. In a complex system such as healthcare for “frail elderly” or “older persons with multi-morbidity”, where hospitalization is predicted for a population without a clear clinical definition, it is unlikely that much higher accuracy measures can be obtained. Accuracy expectations in a complex clinical context must therefore be reasonable if the predictive tool is to be used in a clinically meaningful way. In a clinical context, sensitivity and specificity must be balanced so that a clinically meaningful outcome of the prediction is obtained. When an intervention is planned, the model must be able to find a reasonable share of the true cases (i.e. two-thirds or three-quarters), but this cannot be combined with selecting too many false positive cases (low specificity). The model selected in our study, with an AUC of 0.69, can be regarded as a statistically accurate model that works for a clinically complex population. As illustrated in Table 4, the model must be managed in a clinically relevant context where there is a balance between the number of cases and non-cases selected by the model. We found that a predicted proportion of 40 or 45% of the population is a clinically meaningful reduction of the population to less than half, releasing healthcare resources from the other half with less probable needs. The selected 40 or 45% still contains 62 to 66% of the cases in the whole population. This is a significant enhancement of the probability of reaching the correct target group with a planned proactive intervention. Translated into the reality of a general practitioner (GP) with 2000 listed patients (all ages), he or she would receive a list of 50–70 predicted cases. This is a number of patients that can be screened and prioritized (from high to low risk) by the GP, who can exclude individuals who are apparently falsely predicted. It should be noted that the positive predictive value for the same proportion of predicted individuals (40%) was 31%. In clinical practice, this is of greater importance than the AUC value itself. If clinicians find that 20–30% of the predicted individuals are true cases and that more than 60% of all cases are detected, our experience is that they consider the model clinically relevant.

Prediction enables proactive intervention

The purpose of the prediction was to use it in a clinical setting, which during the subsequent implementation phase meant using it for clinical (intervention) purposes [21]. In clinical practice, the predicted population was transferred as patient lists to each primary care centre, which could plan and implement proactive interventions (e.g. home visits, telephone support, GP visits). Such interventions, when given to a poorly defined group of elderly people in a certain age range or to a “multi-morbidity group” with low predictive value for hospitalization, are likely to direct healthcare resources towards groups that are not in need of them [21]. Conversely, interventions for small, specific groups that can be selected manually (the newly hospitalized, specific medical diagnoses such as heart failure, “above a certain frailty index score”) will miss large groups of elderly people in need of healthcare or largely miss the wider care flows of geriatric hospital care (low sensitivity), see e.g. [13]. Therefore, our healthcare providers have now decided that prediction of patients at risk of hospitalization in the 75+ population will be introduced into routine primary care, where stratified risk lists will be used for planning proactive team-based interventions.

Frailty measures or administrative data?

Using clinical instruments with “frailty” as a predictor for hospital care has practical limitations since it requires a face-to-face meeting and also has poor accuracy for prediction of admission to hospital (AUC 0.52–0.57) [13]. In contrast, predictive models based on administrative healthcare data seem more reliable for the prediction of hospital admissions [18, 19, 25]. In clinical practice, using a digital predictive model combined with a geriatric assessment including a frailty measure is likely to be more useful than either instrument alone [21].

Conclusion

There is strong evidence for the value of dedicated geriatric assessment, both in hospital and in primary care [14, 26,27,28]. Predicting the target population for these assessments and interventions enables the healthcare provider to direct proactive resources towards a group in greater need, which may increase the capacity and cost-effectiveness of the interventions. We provide a clinically useful prediction model with acceptable accuracy for hospital admissions of older, possibly frail, persons. We indicate how it can be used in a clinical primary care context and how healthcare can focus its resources on clinically relevant sub-populations. The method and models used can be generalized and implemented in most healthcare systems with electronic healthcare statistics. Prediction of patients at risk of hospitalization may certainly be one important way for the future healthcare system to meet increasing needs for care, but it must be used sensibly in clinical practice.

Availability of data and materials

Due to ethical restrictions we are not allowed to submit our data-file outside our research environment. If other scientists want to explore or validate their own models on our data-set we will certainly be of assistance in doing so upon reasonable request to the corresponding author.

Abbreviations

AUC: Area under the receiver operating characteristic (ROC) curve

CI: Confidence interval

C-statistic: Concordance statistic, a measure of goodness of fit for binary outcomes in logistic regression

ER: Emergency room

glmnet: Lasso and Elastic-Net Regularized Generalized Linear Models (R package)

GP: General practitioner

ICD10: International Classification of Diseases, 10th Revision

LR: Logistic regression

MASS: Modern Applied Statistics with S (R package)

OR: Odds ratio

PPV: Positive predictive value

pROC: Display and Analyze ROC Curves (R package)

Ref: Reference value

rms: Regression Modeling Strategies (R package)

ROC: Receiver operating characteristic

TRIPOD: Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis

References

  1. WHO: World report on Aging and Health. 2015.

  2. Fitzpatrick AL, Powe NR, Cooper LS, Ives DG, Robbins JA. Barriers to health care access among the elderly and who perceives them. Am J Public Health. 2004;94(10):1788–94.

  3. Terraneo M. Inequities in health care utilization by people aged 50+: evidence from 12 European countries. Soc Sci Med. 2015;126:154–63.

  4. Kingsley D. Aging and health care costs: narrative versus reality. Poverty Public Policy. 2015;7(1):3–21.

  5. Rolden HJ, van Bodegom D, Westendorp RG. Variation in the costs of dying and the role of different health services, socio-demographic characteristics, and preceding health care expenses. Soc Sci Med. 2014;120:110–7.

  6. Banerjee S. Multimorbidity—older adults need health care that can count past one. Lancet Neurol. 2015;385(9967):587–9.

  7. Marengoni A, Angleman S, Melis R, Mangialasche F, Karp A, Garmen A, Meinow B, Fratiglioni L. Aging with multimorbidity: a systematic review of the literature. Ageing Res Rev. 2011;10(4):430–9.

  8. Soong J, Poots AJ, Scott S, Donald K, Bell D. Developing and validating a risk prediction model for acute care based on frailty syndromes. BMJ Open. 2015;5(10):e008457.

  9. Collerton J, Davies K, Jagger C, Kingston A, Bond J, Eccles MP, Robinson LA, Martin-Ruiz C, von Zglinicki T, OF J, et al. Health and disease in 85 year olds: baseline findings from the Newcastle 85+ cohort study. Bmj. 2009;339:b4904.

  10. Nagga K, Dong HJ, Marcusson J, Skoglund SO, Wressle E. Health-related factors associated with hospitalization for old people: comparisons of elderly aged 85 in a population cohort study. Arch Gerontol Geriatr. 2012;54(2):391–7.

  11. Heppenstall CP, Wilkinson TJ, Hanger HC, Keeling S. Frailty: dominos or deliberation? New Zealand Med J. 2009;122(1299):42–53.

  12. Edmans J, Bradshaw L, Gladman JR, Franklin M, Berdunov V, Elliott R, Conroy SP. The identification of seniors at risk (ISAR) score to predict clinical outcomes and health service costs in older people discharged from UK acute medical units. Age Ageing. 2013;42(6):747–53.

  13. Wou F, Gladman JR, Bradshaw L, Franklin M, Edmans J, Conroy SP. The predictive properties of frailty-rating scales in the acute medical unit. Age Ageing. 2013;42(6):776–81.

  14. Ellis G, Whitehead MA, Robinson D, O’Neill D, Langhorne P. Comprehensive geriatric assessment for older adults admitted to hospital: meta-analysis of randomised controlled trials. Bmj. 2011;343:d6553.

  15. Rodriguez-Manas L, Feart C, Mann G, Vina J, Chatterji S, Chodzko-Zajko W, Gonzalez-Colaco Harmand M, Bergman H, Carcaillon L, Nicholson C, et al. Searching for an operational definition of frailty: a Delphi method based consensus statement: the frailty operative definition-consensus conference project. J Gerontol A Biol Sci Med Sci. 2013;68(1):62–7.

  16. Morley JE, Vellas B, van Kan GA, Anker SD, Bauer JM, Bernabei R, Cesari M, Chumlea WC, Doehner W, Evans J, et al. Frailty consensus: a call to action. J Am Med Dir Assoc. 2013;14(6):392–7.

  17. NICE: National Institute for Clinical Excellence. Multimorbidity: clinical assessment and management (NICE clinical guideline 56). 2016.

  18. Haas LR, Takahashi PY, Shah ND, Stroebel RJ, Bernard ME, Finnie DM, Naessens JM. Risk-stratification methods for identifying patients for care coordination. Am J Manag Care. 2013;19(9):725–32.

  19. Coleman EA, Wagner EH, Grothaus LC, Hecht J, Savarino J, Buchner DM. Predicting hospitalization and functional decline in older health plan enrollees: are administrative data as accurate as self-report? J Am Geriatr Soc. 1998;46(4):419–25.

  20. Crane SJ, Tung EE, Hanson GJ, Cha S, Chaudhry R, Takahashi PY. Use of an electronic administrative database to identify older community dwelling adults at high-risk for hospitalization or emergency department visits: the elders risk assessment index. BMC Health Serv Res. 2010;10:338.

  21. Marcusson J, Nord M, Johansson MM, Alwin J, Levin LA, Dannapfel P, Thomas K, Poksinska B, Sverker A, Olaison A, et al. Proactive healthcare for frail elderly persons: study protocol for a prospective controlled primary care intervention in Sweden. BMJ Open. 2019;9(5):e027847.

  22. Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. J Clin Epidemiol. 2015;68(2):134–43.

  23. Swets JA. Measuring the accuracy of diagnostic systems. Science. 1988;240(4857):1285–93.

  24. Kansagara D, Englander H, Salanitro A, Kagen D, Theobald C, Freeman M, Kripalani S. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–98.

  25. Lopez-Aguila S, Contel JC, Farre J, Campuzano JL, Rajmil L. Predictive model for emergency hospital admission and 6-month readmission. Am J Manag Care. 2011;17(9):e348–57.

  26. Lucchetti G, Granero AL. Use of comprehensive geriatric assessment in general practice: results from the ‘Senta Pua’ project in Brazil. Eur J Gen Pract. 2011;17(1):20–7.

  27. Ekdahl AW, Wirehn AB, Alwin J, Jaarsma T, Unosson M, Husberg M, Eckerblad J, Milberg A, Krevers B, Carlsson P. Costs and effects of an ambulatory geriatric unit (the AGe-FIT study): a randomized controlled trial. J Am Med Dir Assoc. 2015;16(6):497–503.

  28. Garrard JW, Cox NJ, Dodds RM, Roberts HC, Sayer AA. Comprehensive geriatric assessment in primary care: a systematic review. Aging Clin Exp Res. 2019;1:1.


Acknowledgements

We thank Lars Valter, statistician at Region Östergötland, for his initial work in this study.

Funding

This study was performed by the authors while employed by Linköping University and/or the County Council of Östergötland. The work was supported by the County Council of Östergötland and Linköping University from the strategic research fund for ‘Health Care and Welfare’ [Grant number 2016186–14]. The funders had no input or influence on the study. Open access funding provided by Linköping University.

Author information

Contributions

JM and JL designed the study. JL performed the statistical analysis. JM, MN, HJD and JL interpreted the data and participated in the framework construction of the manuscript. JM and JL wrote the manuscript. JM, MN, HJD and JL read, improved and approved the final version of the manuscript.

Corresponding author

Correspondence to Jan Marcusson.

Ethics declarations

Ethics approval and consent to participate

The study has been subject to ethical evaluation and was approved by the regional ethical review board in Linköping (Dnr 2016/347–31). Since we used administrative data informed consent was not regarded as necessary by the board.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Marcusson, J., Nord, M., Dong, HJ. et al. Clinically useful prediction of hospital admissions in an older population. BMC Geriatr 20, 95 (2020). https://doi.org/10.1186/s12877-020-1475-6
