
Development and testing of the Geriatric Care Assessment Practices (G-CAP) survey

Abstract

Background

While the Resident Assessment Instrument-Home Care (RAI-HC) tool was designed to support comprehensive geriatric assessment in home care, it is more often used for service allocation and little is known about how point-of-care providers collect the information they need to plan and provide care. The purpose of this pilot study was to develop and test a survey to explore the geriatric care assessment practices of nurses, occupational therapists (OTs) and physiotherapists (PTs) in home care.

Methods

Literature review and expert consultation informed the development of the Geriatric Care Assessment Practices (G-CAP) survey—a 33 question, online, self-report tool exploring assessment and information-sharing methods, attitudes, knowledge, experience and demographic information. The survey was pilot tested at a single home care agency in Ontario, Canada (N = 27). Test-retest reliability (N = 20) and construct validity were explored.

Results

The subscales of the G-CAP survey showed fair to good test-retest reliability within a population of interdisciplinary home care providers [ICC2 (A,1) (M ICC = 0.58) for continuous items; weighted kappa (M kappa = 0.63) for categorical items]. Statistically significant differences between OT, PT and nurse responses [M t = 3.0; M p = 0.01] and moderate correlations between predicted related items [M r = |0.39|] provide preliminary support for our hypotheses around survey construct validity in this population. Pilot participants indicated that they use their clinical judgment far more often than standardized assessment tools. Client input was indicated to be the most important source of information for goal-setting. Most pilot participants had heard of the RAI-HC; however, few used it. Pilot participants agreed they could use assessment information from others but also said they must conduct their own assessments and only sometimes share and rarely receive information from other providers.

Conclusions

The G-CAP survey shows promise as a measure of the geriatric care assessment practices of interdisciplinary home care providers. Findings from the survey have the potential to inform improvements to integrated care planning. Next steps include making adaptations to the G-CAP survey to further improve the reliability and validity of the tool and a broad administration of the survey in Ontario home care.

Background

Older adults want to remain in their own homes as long as possible, and meeting their often compounding physical, functional, cognitive, and psychosocial needs with home care services is a key priority for Canadian health care and for health care systems internationally [1,2,3,4]. Given the complexity of geriatric home care client needs and the number of different care providers potentially involved, a variety of information and data are required to plan and deliver effective home care services. How, when and by whom this information is collected is very important to the experience of integrated care [5]. To prevent duplication, repetition and frustration, a common assessment approach is preferred over each care provider completing their own assessment. This allows for the development of a comprehensive picture of health care needs, while effectively reducing the demand on older adult home care clients and their family/friend caregivers to repeat their story and health history multiple times to different people [5,6,7].

A well-documented model for health care planning and delivery to older adults with complex health issues is the Comprehensive Geriatric Assessment (CGA). Often thought to be synonymous with specialized geriatric medicine, CGA emphasizes an interdisciplinary and multidimensional approach to assessment that requires all involved health care providers to input information on the functional, social and environmental factors related to an older individual’s health, in conjunction with their diagnoses [8, 9]. International evidence on CGA indicates it has been used in a variety of geriatric care settings across the continuum of care. It has been most well established for use in hospital settings, with studies reporting its ability to predict adverse events [10], lead to improved functional outcomes [11, 12] and decrease morbidity, mortality and hospital admissions [13,14,15]. The use of CGA in primary and community care has also been documented [8, 9]. Trials of CGA combined with multidimensional interventions for community-dwelling older adults have shown improvement in clients’ self-reported ability to complete activities of daily living [16, 17]. CGA has also been used by Mobile Geriatric Assessment Teams to coordinate the provision of targeted multidisciplinary primary care to rural-dwelling, and frail, older adults and has been applied in a preventive context for at-risk community-dwelling older people [18,19,20].

A key element of CGA is that comprehensive assessment and delivery of care are intended to be both integrated and carried out by point-of-care providers, yet there is little evidence on how this is practically achieved in the home care setting. interRAI is a collaboration of international researchers and practitioners in over 35 countries who have developed a suite of comprehensive assessment tools designed to support evidence-informed decision making across the continuum of care [21]. The Resident Assessment Instrument-Home Care (RAI-HC) is a standardized patient assessment tool designed to collect comprehensive patient information for care planning and collaborative decision-making by multiple providers in home care and is used in many countries around the world [22,23,24,25,26,27]. Since 2002, the RAI-HC has been mandated for use in Ontario, Canada to guide service allocation of government-funded home care services [28]. However, care coordinators have 14 days following patient admission to complete RAI-HC assessments and the data are not routinely shared in a useable format or applied by direct-service home care agencies in their delivery of services [28, 29]. Multiple providers from different health care disciplines are often involved in the direct care of older adults, but they work in isolation of each other in individual client homes and therefore individually collect the information they need to provide care [30,31,32].

The way the RAI-HC effectively combines cross-disciplinary information in a standard format makes it ideal to guide CGA practice in Ontario home care, yet the structure and organization of care in this sector may be impeding the opportunity for this tool to be used to its full capacity. Numerous layers of service provision and a lack of role clarity between assessment for service allocation and point-of-care planning in Ontario home care often result in multiple assessments for each client [33].

Nurses, occupational therapists (OTs) and physiotherapists (PTs) are the most common providers conducting patient assessments at the point-of-care in home care [34]. However, to date, their specific assessment and information sharing practices are largely unknown and undocumented. An understanding of the geriatric care assessment practices of individual providers is required to determine how to optimize individual provider contributions to CGA and care planning in this sector. More integrated care planning at the point-of-care has the potential to enhance both the quality and the experience of geriatric home care [35]. Consultation research to address this knowledge gap in home care is challenging as the geographic dispersion of providers and variable care schedules of clients make it difficult to coordinate and conduct face-to-face interviews and focus groups [36]. As an alternate approach, online surveys are an effective method to reach a broader group of people, and allow providers to participate at their convenience [37].

The purpose of this study was to develop and pilot test an online self-report survey tool to explore the geriatric care assessment practices of nurses, OTs and PTs in home care.

Methods

Survey development

The Geriatric Care Assessment Practices (G-CAP) survey was developed using multiple sources of information and guided by a multi-step approach recommended by Streiner et al. [38]:

  1. Confirm there is no pre-existing survey tool

A scan of published and grey literature was completed to confirm there were no pre-existing tools for collecting data on the geriatric care assessment practices of point-of-care providers in home care.

  2. Determine specificity of the tool

Informed by the background and scope of the project, the researchers determined that the G-CAP survey would focus on the geriatric assessment practices of nurses, OTs and PTs in home care. In accordance with Ontario’s Action Plan for Seniors, the geriatric population was defined as any individual aged 65 years and older who was currently receiving home care for any health issue [39].

  3. Consider homogeneity of the tool

Researchers hypothesized that the G-CAP survey items would be meaningful at the individual level and therefore would not be added together to generate a single composite score. However, the researchers planned to explore internal consistency (α) between subsets of seemingly related items to determine whether sub-scales existed within the tool. If present, this would indicate groups of effect indicators of sub-constructs related to the overall construct of geriatric assessment [38].

  4. Determine the range of items to be included in the scale

As it is preferable in scale development to derive items from multiple sources, previous literature and expert opinion were used to create the item pool [38]. A scan of published and grey literature and current practices in CGA was completed to determine relevant geriatric care domains, standardized assessment tools and other items that should be explored in this type of survey. A group of clinical leaders from various disciplines involved in geriatric home care at a Canadian home care agency were also consulted to help formulate additional items for inclusion in the G-CAP survey.

A first draft of the survey was developed based on the candidate domains and items from the literature and clinical leadership group (see Additional file 1). To further refine the survey tool, a convenience sample of management, education and clinical experts in nursing, occupational therapy and physiotherapy (N = 7) were recruited to participate in key informant interviews where they were asked to review and confirm the candidate list of domains and items to be included in the survey and comment on face validity and content validity (relevance, representativeness and coverage of items). The key informants were also asked to review survey items for any ambiguous wording and comment on the overall length of the tool from a feasibility perspective [38]. All key informants provided written consent to participate in the interviews, which were audio-recorded and transcribed verbatim. Interview transcripts were thematically analyzed by two independent researchers using an inductive coding method and NVivo 10 software [40,41,42]. Each researcher completed a line-by-line analysis of the transcripts to code meaningful units of data, which were then brought together into categories that were labeled according to similarities in meaning. Categories were then compared and organized into themes related to survey tool validity and the adoption of a common assessment approach in home care [41, 42]. After completing their individual analyses, the two researchers came together to compare, contrast and finalize the themes.

  5. Scaling the responses

Researchers determined that three different types of response options were needed to match the question types in the refined pool of survey items: 1) perceived frequency; 2) level of agreement; and 3) perceived importance. As these response options are bipolar in nature, they were scaled on a seven-point Likert-type scale [38].

Pilot testing

Reliability and validity

Test-retest reliability of the G-CAP survey for use with nurses, OTs and PTs in home care was explored to determine the stability of provider responses about their geriatric care assessment practices over time [38]. Point-of-care providers were asked to participate in the survey on two separate occasions, time one (T1) and time two (T2), which were separated by a period of 2 weeks. To determine if the G-CAP survey measures the intended geriatric care assessment constructs with nurses, OTs and PTs in home care, construct validity was explored. Hypotheses were generated and tested to explore expected differences (discriminative validity) and relationships (divergent and convergent validity) between various attributes of the survey and behaviours of respondents based on discussions with the clinical leadership group [38]. Discriminative construct validity was explored by testing the following hypotheses about differences between nurse, OT and PT responses:

  a) Rehabilitation therapists (OTs and PTs combined) will use measures of functional status/activity and rest more often than nurses;

  b) Nurses will use measures of skin integrity more often than rehabilitation therapists;

  c) Rehabilitation therapists will assess mobility more often than nurses;

  d) Rehabilitation therapists will use measures of mobility more often than nurses; and

  e) OTs will use measures of the patient environment more often than PTs.

Convergent and divergent construct validity was explored by testing the following hypotheses about correlations between survey items:

  a) Years of experience will be positively correlated with having heard about the RAI-HC;

  b) Opinions that client assessment requires observation of a client in their home will be positively correlated with the use of observation and interview skills;

  c) Believing assessment involves conversations with health care providers will be positively correlated with sharing information; and

  d) Believing that standardized assessment tools are part of geriatric assessment will be negatively correlated with years of experience.

Sample size

To ensure the analysis of test-retest reliability was appropriately powered, the hypothesis testing approach of Kraemer and Thiemann [43] was used to determine an appropriate sample size for G-CAP survey participants. To determine whether an “excellent” reliability of > 0.75 was significantly different from a “poor” reliability of 0.40, a target sample size of 21 participants at T1 and T2 was determined to be appropriate [43,44,45]. This sample size is also sufficient for detecting large correlations (> 0.5) [43, 46].
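
For readers who want to reproduce the general logic of this calculation, a minimal sketch is shown below. It uses a Fisher z approximation for contrasting one reliability-type coefficient against another rather than the exact Kraemer and Thiemann tables used in the study, and the significance level, power and rounding are assumptions; it therefore yields a figure only in the same general range as the published target of 21.

```python
import math
from scipy.stats import norm

def n_for_reliability_contrast(rho_null=0.40, rho_alt=0.75, alpha=0.05, power=0.80):
    """Approximate sample size to distinguish an 'excellent' reliability (rho_alt)
    from a 'poor' reliability (rho_null) via the Fisher z transformation.
    This is an approximation, not the exact Kraemer & Thiemann procedure."""
    effect = math.atanh(rho_alt) - math.atanh(rho_null)  # difference on the Fisher z scale
    z_alpha = norm.ppf(1 - alpha)                        # one-sided test (assumption)
    z_beta = norm.ppf(power)
    n = ((z_alpha + z_beta) / effect) ** 2 + 3
    return math.ceil(n)

if __name__ == "__main__":
    # Prints roughly 24, in the same range as the target of 21 reported in the study.
    print(n_for_reliability_contrast())
```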

Recruitment

Point-of-care nurses, OTs and PTs in four geographic areas within a single home care provider agency in Ontario made up the participant pool for this study. Inclusion criteria for the research study were: being actively registered with a professional college for one of the three disciplines of interest (nursing, occupational therapy or physiotherapy) in Canada, currently working as a point-of-care provider in home care in Ontario, Canada, and being able to read and write English. A convenience sampling strategy was employed until the target sample size was reached. T1 recruitment began with telephone information sessions between a researcher (JG) and clinical leaders within each of the four geographic areas. Following these information sessions, blast email messages were sent out by clinical leaders to approximately 290 frontline staff requesting their voluntary participation in the survey, providing a link to the online survey in SurveyMonkey and outlining a one-week deadline for participation. All survey participants were provided with necessary study information at the beginning of the survey and consent was implied from their voluntary submission of the survey.

Point-of-care providers who decided to participate in the survey were asked to provide their email addresses at the end of T1 survey completion. Within 1 week, a researcher (JG) emailed each T1 survey participant directly, inviting them to participate in the survey at T2 and providing them with a one-week deadline to do so. This deadline ensured that both T1 and T2 survey completion took place within a 14-day period, an optimal time frame for test-retest reliability [38]. Up to two reminder emails were sent to each participant to complete the survey, after which point, if they had not participated, it was assumed that they had decided to withdraw from the study. As participants completed the survey electronically, they did not have access to their T1 responses when completing the survey at T2. Participant responses were de-identified after T2 survey completion and each participant was assigned a unique study identification number for the purposes of linking T1 and T2 responses together.

Participants were not paid for their time to complete the survey at T1 or T2, but in recognition of their efforts, they were given the option to enter their name into a draw for one of four gift cards ($50 CAD each) if they completed the survey at both T1 and T2.

Data analysis

Participant survey responses at T1 were used to provide demographic information and to complete construct validity analyses; data from T1 and T2 were used to analyze test-retest reliability. All skipped frequency questions were coded as “never”, and all skipped agreement or importance questions were coded as “neutral”.
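
As an illustration of this recoding step, a minimal pandas sketch is shown below; the column names and the numeric anchors (1 = “never” on the frequency scale, 4 = “neutral” on the agreement/importance scales) are assumptions for illustration, not the study’s actual codebook.

```python
import pandas as pd

# Hypothetical responses on 7-point scales; NaN marks a skipped question.
df = pd.DataFrame({
    "freq_mobility_assessment": [5, None, 7],   # frequency item
    "agree_need_own_assessment": [6, 2, None],  # agreement item
    "importance_client_input": [None, 7, 6],    # importance item
})

frequency_items = ["freq_mobility_assessment"]
agreement_or_importance_items = ["agree_need_own_assessment", "importance_client_input"]

# Skipped frequency questions coded as "never" (assumed anchor = 1);
# skipped agreement/importance questions coded as "neutral" (assumed anchor = 4).
df[frequency_items] = df[frequency_items].fillna(1)
df[agreement_or_importance_items] = df[agreement_or_importance_items].fillna(4)

print(df)
```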

Statistical analyses were completed using IBM SPSS 20.0 software, beginning with descriptive statistics [47]. First, internal consistency (α) was explored for groups of related categorical items. Cronbach’s alpha values less than 0.5 were considered unacceptable, between 0.51 and 0.60 were considered poor, between 0.61 and 0.70 were considered acceptable, between 0.71 and 0.90 were considered good and greater than 0.90 were considered excellent [48]. For groups of items with α > 0.61, a single Intra-Class Correlation Coefficient [ICC2 (A,1)] was calculated to determine test-retest reliability for these potential sub-scales of related items [38]. The test-retest reliability of individual categorical items of the G-CAP survey was evaluated using weighted kappa coefficients with quadratic weights. Following the guidelines suggested by Fleiss [44], reliability values below 0.40 were considered poor, between 0.41 and 0.75 were considered fair to good and > 0.75 were considered excellent. Discriminative construct validity was evaluated by comparing mean results using a two-tailed independent samples t-test with a 5% level of significance (α = 0.05) for each hypothesis about differences between disciplines. Convergent and divergent construct validity was tested by calculating Pearson product moment correlations for each hypothesized relationship between items in the G-CAP survey. Following the guidelines suggested by Cohen [46], correlations of 0.1 were considered small, of 0.3 were considered moderate, and of 0.5 were considered large.
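
The analyses were run in SPSS; for readers using open-source tools, the sketch below shows analogous calculations in Python. The data layout, variable names and values are invented for illustration, and the choice of pingouin, scikit-learn and SciPy functions is an assumption about one reasonable way to reproduce these statistics, not the study’s actual code. In pingouin, the row labelled ICC2 corresponds to the two-way random-effects, absolute-agreement, single-rater coefficient, i.e. ICC2 (A,1).

```python
import pandas as pd
import pingouin as pg
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical wide-format pilot data: one row per provider, with T1/T2 scores.
df = pd.DataFrame({
    "provider_id": [1, 2, 3, 4, 5, 6],
    "discipline": ["nurse", "nurse", "nurse", "OT", "PT", "OT"],
    "years_experience": [3, 10, 25, 7, 15, 2],
    "heard_of_rai_hc_t1": [0, 1, 1, 1, 1, 0],
    "item1_t1": [5, 6, 4, 7, 6, 5], "item1_t2": [5, 6, 5, 7, 6, 4],
    "item2_t1": [4, 6, 5, 6, 7, 5],
    "item3_t1": [5, 5, 4, 7, 6, 6],
    "subscale_t1": [4.7, 5.7, 4.3, 6.7, 6.3, 5.3],
    "subscale_t2": [4.9, 5.5, 4.6, 6.8, 6.1, 5.0],
    "mobility_measure_use_t1": [2, 3, 2, 6, 7, 5],
})

# Internal consistency (Cronbach's alpha) for a candidate group of related T1 items.
alpha, ci = pg.cronbach_alpha(data=df[["item1_t1", "item2_t1", "item3_t1"]])

# Test-retest reliability of a potential subscale: ICC2 (A,1) needs long format
# (one row per provider x time point).
long = df.melt(id_vars="provider_id", value_vars=["subscale_t1", "subscale_t2"],
               var_name="time", value_name="score")
icc = pg.intraclass_corr(data=long, targets="provider_id", raters="time", ratings="score")
icc2_single = icc.loc[icc["Type"] == "ICC2", "ICC"].item()

# Test-retest reliability of an individual 7-point item: quadratically weighted kappa.
kappa = cohen_kappa_score(df["item1_t1"], df["item1_t2"], weights="quadratic")

# Discriminative validity: two-tailed independent-samples t-test
# (e.g., rehabilitation therapists vs nurses on use of mobility measures).
rehab = df.loc[df["discipline"].isin(["OT", "PT"]), "mobility_measure_use_t1"]
nurses = df.loc[df["discipline"] == "nurse", "mobility_measure_use_t1"]
t_stat, p_val = stats.ttest_ind(rehab, nurses)

# Convergent validity: Pearson correlation between two hypothesized related variables.
r, p_r = stats.pearsonr(df["years_experience"], df["heard_of_rai_hc_t1"])

print(alpha, icc2_single, kappa, t_stat, p_val, r, p_r)
```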

Results

The G-CAP survey

An initial scan of published and grey literature identified various classifications of care domains relevant to CGA. Table 1 illustrates some examples of these different classifications.

Table 1 Examples of CGA assessment domain classifications reported in the literature

Consideration of these various conceptualizations of CGA domains in terms of their frequency of inclusion in the literature, relevance to home care, research and interdisciplinary practice led to defining a list of initial domains and items to consider for inclusion in the G-CAP survey (see Table 2). Additional academic and grey literature searching and consultation with the clinical leadership group led to refinement of the domains and item pool for inclusion in the survey, including the addition of eight standardized assessment tools, items related to opinions, use and knowledge of the RAI-HC and clinician observation and interview skills (see Table 2).

Table 2 Development of domains and items to be included in the survey

Key informant interviews indicated good face validity for the proposed survey domains and items. All key informants indicated that they believed the survey domains and items appeared to be assessing the geriatric care assessment practices of point-of-care home care providers and felt that the data provided would be valuable. For example, one expert indicated: “This is nice…it is nice. I think it is nice. It will be interesting to see what you are going to get…I think it will be really interesting to see what comes out of it”.

In terms of content validity, key informants were generally supportive of the items included in the survey; however, they suggested a reclassification of some of the survey domains using language they felt would be better understood by home health care providers. Key informants suggested nine additional standardized assessment tools that should be included in the survey (see Table 2).

Clinical expert key informants also discussed various barriers and facilitators to adopting an interdisciplinary common assessment approach in home care (see Table 3). These perceptions of barriers and facilitators informed the inclusion of additional survey items related to attitudes towards assessment, and experiences with interdisciplinary collaboration.

Table 3 Expert opinions regarding barriers and facilitators for moving to common assessment approaches in home care

Experts indicated that the survey was quite long, although they also agreed that all the items were necessary for a thorough exploration of geriatric assessment practices. This prompted the decision to include automatic skip patterns in the online survey so that participants would not spend time responding to questions in an area that was not applicable to their individual geriatric assessment practices.

The final version of the G-CAP survey included 33 questions related to the following five areas: 1) Assessment methods; 2) Attitudes toward assessment; 3) Perceptions of the RAI-HC; 4) Interdisciplinary collaboration; and 5) Demographic information (see Additional file 2).

Demographics

A total of 27 of the approximately 290 health care providers (9.3%) who were emailed the survey participated at T1. Of these 27 participants, 20 (74.1%) subsequently participated in the survey at T2. Participation took place between September 1, 2014 and November 30, 2014. Participants were mostly female (96.3%) and ranged in age from 23 to 75 years (M = 42.6, SD = 13.8), with an average of 15.6 years of experience in their respective disciplines (SD = 12.7, Range: 1–53). More than half of the participants (55.6%) had been working in home care for at least five years, with one-third (33.3%) having worked in the sector longer than 10 years. Most participants had experience working in other health care sectors, with 70.4% having previously worked in a hospital and 51.9% in long-term care. Most participants (88.9%) indicated that more than half of their home care clients are over the age of 65 years. The characteristics of participants are displayed in Table 4.

Table 4 Characteristics of survey participants

Reliability

ICC2 (A,1) coefficients indicate fair to good test-retest reliability for most groups of related categorical items, and excellent test-retest reliability for one such group; these groups comprise potential sub-scales of the G-CAP survey within a population of interdisciplinary home care providers (M = 0.58) (see Table 5).

Table 5 Test-retest reliability for groups of related categorical items (potential-subscales)

Weighted kappa coefficients indicate, on average, fair to good test-retest reliability for the individual categorical items of the G-CAP survey within a population of interdisciplinary home care providers (M kappa = 0.63) (see Table 6).

Table 6 Test-retest reliability for individual categorical items

Validity

Significant two-sample t-test statistics (p < 0.05, two-tailed) confirmed the hypothesized differences among nurse, OT and PT responses. Table 7 depicts the t-test scores that support each hypothesis about differences between these groups (M t = 3.0; M p = 0.01), which demonstrates preliminary discriminative construct validity for use of the G-CAP survey with interdisciplinary home health care providers.

Table 7 Discriminative construct validity for use of the G-CAP survey with interdisciplinary home health care providers

Pearson’s product moment correlation coefficients (r) confirmed expected convergent and divergent relationships between survey items and demographic information. Table 8 details the correlation coefficients for each hypothesis tested, with moderate correlation values, on average (M r = |0.39|), which demonstrates preliminary convergent and divergent construct validity for use of the G-CAP survey with interdisciplinary home health care providers.

Table 8 Convergent and divergent construct validity for use of G-CAP survey with home health care providers

Preliminary survey findings

Pilot survey data point to five notable findings regarding the geriatric care assessment practices of nurses, OTs and PTs in home care.

Survey participants use their own clinical observation and interview skills far more often than any standardized tools for geriatric assessment

Participants indicated that they use their own observation and interview skills to assess each of the nine geriatric care domains included in the G-CAP survey (M = 5.6/7, SD = 2.1, Range: 1–7) significantly more often than any standardized assessment tools (M = 1.7/7, SD = 1.6, Range: 1–7). The only standardized assessment tools that participants indicated they used more than “almost never” (> 2 on a 7-point scale), on average, were the Numeric Pain Rating Scale (NPRS), which is used often (M = 5.0/7, SD = 2.4, Range: 1–7), the Verbal Rating Scale for pain, which is used often (M = 5.0/7, SD = 2.4, Range: 1–7), and the Braden Scale for Predicting Pressure Sore Risk, which is rarely used (M = 3.4/7, SD = 2.5, Range: 1–7).

The majority of survey participants had heard of the RAI-HC, but do not actually use it

Although 59.3% of survey participants had previously heard about the RAI-HC, on average they never use it to conduct comprehensive assessments of older home care clients (M = 1.66/7, SD = 1.7, Range: 1–6).

Participants said that client input is the most important source of information for goal-setting

On average, participants rated input from the client as the most important (M = 6.7/7, SD = 0.45, Range: 6–7) for setting individual client goals. Participants consistently rated the assessment data that others collect (M = 5.9/7, SD = 0.78, Range: 4–7) and the professional opinions of other health care providers (M = 5.9/7, SD = 0.80, Range: 4–7) as less important when establishing these goals.

Participants agreed that they could use client information collected by other health care professionals but also agreed that they need to conduct client assessments themselves in order to provide care

While participants strongly agreed that they could use patient information collected by other health care professionals (M = 6.0/7, SD = 0.83, Range: 4–7), they also somewhat agreed that they must conduct client assessments themselves in order to provide care to clients (M = 5.7/7, SD = 1.3, Range: 1–7).

Participants only sometimes share, and rarely receive, assessment information from other health care providers

Participants indicated they only sometimes share client information with other health care providers in their discipline (M = 4.2/7, SD = 1.6, Range: 2–7) or outside of their discipline (M = 4.3/7, SD = 1.4, Range: 1–4). While participants indicated they sometimes receive client information from other health care providers in their discipline (M = 4.0/7, SD = 1.4, Range: 1–7), they rarely receive client information from providers outside of their discipline (M = 3.7/7, SD = 1.3, Range: 1–7).

Discussion

Reliability and validity of the geriatric care assessment practices (G-CAP) survey

The G-CAP survey showed fair to good test-retest reliability according to the Fleiss criteria [44]. However, it is important to note that these criteria are not specific to ICC, kappa and correlation values and are routinely used to interpret many different types of reliability coefficients in the literature. Setting reliability cut-off values has therefore been reported to be a fairly arbitrary, although common, practice in the development of novel measurement tools and scales [38, 54]. Nunnally [55], however, adds a critical distinction for interpreting psychometric data, based on the purpose of the tool being developed: if the tool is to be used for research purposes, a reliability coefficient of at least 0.70 is suggested, whereas tools used for clinical decision-making should have reliability values of at least 0.90 [55].

As the G-CAP survey was developed specifically for research purposes, there is room for some improvement in test-retest reliability. Participant responses for the frequency of assessment of each care domain were almost exclusively at the high end of the scale (M = 5.6/7), while their responses for the frequency of using standardized assessment tools were substantially lower (M = 1.7/7). Based on these results, modification of the scales to address these ceiling and floor effects would enhance reliability and the ability to discriminate between more nuanced positive and negative responses [38]. Changing the 7-point Likert-type scale to a 5-point Likert-type scale is predicted to improve scale reliability, and shifting the neutral point of the scale depending on the question is predicted to improve the ability of the scale to discriminate between positive and negative responses [38]. These changes will be made prior to the broad scale administration of the G-CAP survey. Further, as reliability is context-specific, Streiner et al. [38] suggest that it tends to increase when a tool is administered in a more heterogeneous population, as planned for the next phase of this research when the G-CAP survey is administered to a wider group of home care nurses, OTs and PTs.

Statistically significant differences between OT, PT and nurse responses and moderate correlations between predicted related items of the G-CAP survey tool provide preliminary support for our hypotheses around survey construct validity in this population. Modifications to the tool as described above and broader administration of the G-CAP survey will provide additional opportunities to explore its validity for use with interdisciplinary home care providers.

Exploring the geriatric care assessment practices of nurses, OTs and PTs in home care

Survey participants said they use their clinical observation and interview skills far more than any standardized assessment tools when conducting geriatric assessments at the point-of-care in home care. Previous literature supports the use of clinical judgment in geriatric care, especially in predicting falls risk [56, 57]. One study found that clinical judgment was more accurate than traditionally used falls-risk assessment tools, although less sensitive [58]. Clinical judgment has also been shown to be more effective than standardized assessment in predicting frailty in geriatric patients with cancer [59]. However, standardized assessment has been found to be superior to clinical judgment in other areas of geriatric care, including functional assessment of cognition and ADLs, particularly in predicting more moderate impairments in function that could be targeted with earlier intervention, and in identifying frailty [60,61,62]. Further exploration of the individual and combined use of standardized tools and clinical judgment is needed to support a CGA type of assessment approach in home care.

Only 59.3% of surveyed home health care providers had previously heard about the RAI-HC. Of these participants, most also indicated that they never use the RAI-HC themselves to collect data about geriatric clients to plan and provide care. These results further illuminate the previously cited disconnect between system-level assessment for the purposes of service allocation and point-of-care assessment for the purposes of real-time care delivery in Ontario home care [28]. Further, participants indicated that they use very few other standardized assessment tools, which may indicate that they do not see a more appropriate alternative to the RAI-HC for comprehensive standardized assessment at the point-of-care in geriatric home care. This suggests that the potential of the RAI-HC is under-realized and supports the need to further explore the applicability of the RAI-HC in point-of-care assessment to foster real-time care planning.

Survey participants’ opinions regarding the priority of information sources for individual goal-setting indicate that input from the client is most important. While participants’ prioritization of client input in goal-setting is aligned with current best practices in shared decision-making and person- and family-centred care for individual interactions between clients and providers, their responses also reflect the need to improve interdisciplinary collaboration in geriatric home care [63,64,65,66,67]. Participants indicated they only sometimes share and rarely receive client assessment information from other health care providers, and that professional opinion and assessment data from other health care providers are the least important sources of information for client goal-setting. Additionally, 96.2% of participants indicated that they can make use of client data collected by other health care providers, but 85.1% of participants also said that they must conduct the patient assessment themselves to be able to provide care. These findings contrast with defined optimal collaborative practice, which Curran [68] describes as follows:

…involves the continuous interaction of two or more professionals or disciplines organized into a common effort to solve or explore common issues, with the best possible participation of the patient. Collaborative practice is designed to promote the active participation of each discipline in patient care. It enhances patient- and family-centred goals and values, provides mechanisms for continuous communication among caregivers, optimizes staff participation in clinical decision-making within and across disciplines and fosters respect for disciplinary contributions of all professionals. (p.1)

Further exploration is required into mechanisms for consistent and efficient communication and information-sharing between providers at the point-of-care in home care [69, 70].

Strengths and limitations

This study has several strengths. To our knowledge, this is the first study to systematically explore the geriatric care assessment practices of point-of-care home care providers using survey methods, and the G-CAP survey is the first tool of its kind. Another strength is the psychometric testing of the G-CAP survey tool: the results provide preliminary support for using the instrument to explore the geriatric assessment practices of interdisciplinary point-of-care providers in home care, and the tool may therefore be useful for exploring geriatric care assessment practices of interdisciplinary providers in other geographies and care settings.

This study also has several limitations. First, the data represent a pilot implementation of the G-CAP survey and are only reflective of health care provider views in three disciplines in a single direct-service home care agency. However, the representation of nurses (n = 12), OTs (n = 8) and PTs (n = 7) in the study sample is reflective of the representation of these disciplines within home care in Ontario at the time of data collection. In 2010, there were 125,844 nurses working in Ontario and the community care sector employed 18.4% of these nurses; in 2011, there were 4506 occupational therapists working in Ontario, with 31.1% working in the community sector; and in 2009, there were 6391 physiotherapists working in Ontario, with 14.8% working in the community sector [71,72,73]. Further, the study sample represents four different geographic locations across Ontario. Additional research is required to explore the geriatric care assessment/client observation and information-sharing practices of other relevant disciplines within home care, including social workers, speech-language pathologists and personal support workers.

Another potential limitation of this study is that the data were collected in 2014. However, as the overall structure of the Ontario home care system has remained largely unchanged since that time, the findings are believed to be relevant to current care practices. A recent study on the use of home care assessment data in the home care sector confirms this relevance, indicating that these data are both undervalued and underutilized for evidence-informed decision-making across the sector [74].

Another limitation in the study methods was the low response rate to the G-CAP survey (9.3%), which might be attributed to the busy schedules of point-of-care providers, the length of the survey, lack of personalization in email administration or lack of direct remuneration. Methods were chosen to test an efficient approach for reaching large numbers of health care providers across the province, which is required in the next phase of this work where broad administration of the G-CAP survey will occur. The positive response rate following the researcher’s in-person promotion of the survey within a single OT team indicates the need for additional personalization of the survey experience to boost response rates in future stages [75].

Conclusions

The newly developed G-CAP survey tool shows promise as a measure of the geriatric care assessment practices of interdisciplinary home health care providers.

Preliminary data indicate that point-of-care geriatric assessment in home care by nurses, OTs and PTs is heavily focused on clinical observation and interview skills, with limited use of the RAI-HC or other standardized assessment tools to collect client information. Although individual providers show good intention to set and work towards common person- and family-centred goals, limited information-sharing occurs between providers, both within and across disciplines.

Pilot results point to the potential to integrate RAI-HC data collected for service allocation at the system level with clinical judgment and assessment data collected by point-of-care providers, reflecting a more CGA-type approach. Next steps include adapting the G-CAP survey to further improve the reliability and validity of the tool and administering it broadly across multiple home care service provider agencies in Ontario. Results will be used to inform improvements to integrated geriatric care planning through better documentation and standardization of clinical assessment practices using validated tools, and through sharing and using this information across the care team. A more seamless geriatric care planning approach that is consistent with the principles of CGA has the potential to transcend discipline, agency and system boundaries to achieve more efficient and integrated delivery of geriatric home care.

Availability of data and materials

The datasets generated and/or analyzed during the study are not publicly available due to them containing information that could compromise research participant privacy/consent. Anonymized data are available from the corresponding author on reasonable request.

Abbreviations

RAI-HC:

Resident Assessment Instrument-Home Care

OT:

Occupational Therapist

PT:

Physiotherapist

ICC:

Intraclass Correlation Coefficient

CGA:

Comprehensive Geriatric Assessment

T1:

Time One

T2:

Time Two

ADLs:

Activities of Daily Living

IADLs:

Instrumental Activities of Daily Living

References

  1. Better Home Care. Better Home Care in Canada: A National Action Plan. 2016. http://www.thehomecareplan.ca/wp-content/uploads/2016/10/Better-Home-Care-Report-Oct-web.pdf. Accessed 18 July 2018.

  2. Health Council of Canada. Seniors in need, caregivers in distress: What are the home care priorities for seniors in Canada?. 2012 http://www.carp.ca/wpcontent/uploads/2012/04/HCC_HomeCare_2d.pdf. Accessed 18 July 2018.

  3. Gilmour H. Unmet home care needs in Canada. Health Rep. 2018;29(11):3–11.

  4. World Health Organization. The growing need for home health care for the elderly. 2015 https://applications.emro.who.int/dsaf/EMROPUB_2015_EN_1901.pdf?ua=1&ua=1. Accessed 25 May 2020.

  5. Leatt P, Pink GH, Guerriere M. Towards a Canadian model of integrated healthcare. Healthc Pap. 2000;1(2):13–35.

  6. Baranek P. Integration of care: summary report analysing the responses of all provider groups [background report]. 2010. http://www.changefoundation.ca/site/wp.../integration_of_care_report1.pdf. Accessed 18 July 2018.

  7. MacAdam M. Moving toward health service integration: provincial Progress in system change for seniors. 2009. http://globalag.igc.org/health/world/2009/canada.pdf. Accessed 20 July 2018.

  8. Heckman G, Gray LC, Hirdes JP. Addressing health care needs for frail seniors in Canada: the role of interRAI instruments. Can Geriatr Soc J CME. 2013;3(1):7–16.

  9. Welsh TJ, Gordon AL, Gladman JR. Comprehensive geriatric assessment--a guide for the non-specialist. Int J Clin Pract. 2014. https://doi.org/10.1111/ijcp.12313.

  10. Avelino-Silva TJ, Farfel JM, Curiati JA, Amaral JR, Campora F, Jacob-Filho W. Comprehensive geriatric assessment predicts mortality and adverse outcomes in hospitalized older adults. BMC Geriatr. 2014. https://doi.org/10.1186/1471-2318-14-129.

  11. Baztan JJ, Suarez-Garcia FM, Lopez-Arrieta J, Rodriguez-Manas L, Rodriguez-Artalejo F. Effectiveness of acute geriatric units on functional decline, living at home, and case fatality among older patients admitted to hospital for acute medical disorders: meta-analysis. BMJ Open. 2009. https://doi.org/10.1136/bmj.b50.

  12. Ellis G, Whitehead MA, O'Neill D, Langhorne P, Robinson D. Comprehensive geriatric assessment for older adults admitted to hospital. Cochrane Database Syst Rev. 2011. https://doi.org/10.1002/14651858.CD006211.pub2.

  13. Caplan GA, Williams AJ, Daly B, Abraham K. A randomized, controlled trial of comprehensive geriatric assessment and multidisciplinary intervention after discharge of elderly from the emergency department--the DEED II study. J Am Geriatr Soc. 2004. https://doi.org/10.1111/j.1532-5415.2004.52401.x.

  14. Cohen HJ, Feussner JR, Weinberger M, Carnes M, Hamdy RC, Hsieh F, et al. A controlled trial of inpatient and outpatient geriatric evaluation and management. N Engl J Med. 2002. https://doi.org/10.1056/NEJMsa010285.

  15. Van Craen K, Braes T, Wellens N, Denhaerynck K, Flamaing J, Moons P, et al. The effectiveness of inpatient geriatric evaluation and management units: a systematic review and meta-analysis. J Am Geriatr Soc. 2010. https://doi.org/10.1111/j.1532-5415.2009.02621.x.

16. Boult C, Boult LB, Morishita L, Dowd B, Kane RL, Urdangarin CF. A randomized clinical trial of outpatient geriatric evaluation and management. J Am Geriatr Soc. 2001;49(4):351–9.

  17. Melis RJ, van Eijken MI, Teerenstra S, van Achterberg T, Parker SG, Borm GF, et al. A randomized study of a multidisciplinary program to intervene on geriatric syndromes in vulnerable older people who live at home (Dutch EASYcare study). J Gerontol A Biol Sci Med Sci. 2008;63(3):283–90.

  18. Beauchet O, Launay CP, Merjagnan C, Kabeshova A, Annweiler C. Quantified self and comprehensive geriatric assessment: older adults are able to evaluate their own health and functional status. PLoS One. 2014. https://doi.org/10.1371/journal.pone.0100636.

  19. Rockwood K, Howlett S, Stadnyk K, Carver D, Powell C, Stolee P. Responsiveness of goal attainment scaling in a randomized controlled trial of comprehensive geriatric assessment. J Clin Epidemiol. 2003;56(8):736–43.

20. Suijker JJ, Buurman BM, ter Riet G, van Rijn M, de Haan RJ, de Rooij SE, Moll van Charante EP. Comprehensive geriatric assessment, multifactorial interventions and nurse-led care coordination to prevent functional decline in community-dwelling older persons: protocol of a cluster randomized trial. BMC Health Serv Res. 2012. https://doi.org/10.1186/1472-6963-12-85.

  21. The interRAI organization: who we are. 2020. https://www.interrai.org/organization. Accessed 25 May 2020.

  22. interRAI. Home Care. 2012. http://www.interrai.org/home-care.html. Accessed 20 July 2018.

  23. Parsons M, Senior H, Mei-Hu Chen X, Jacobs S, Parsons J, Sheridan N, Kenealy T. Assessment without action; a randomised evaluation of the interRAI home care compared to a national assessment tool on identification of needs and service provision for older people in New Zealand. Health Soc Care Community. 2013. https://doi.org/10.1111/hsc.12045.

  24. Stolle C, Wolter A, Roth G, Rothgang H. Effects of the resident assessment instrument in home care settings. Z Gerontol Geriat. 2012;45:315–22.

  25. Landi F, Tua E, Onder G, Carrara B, Sgadari A, Rinaldi C, et al. Minimum data set for home care: a valid instrument to assess frail older people living in the community. Med Care. 2000;38(12):1184–90.

  26. June KJ, Lee JY, Yoon JL. Effects of case management using resident assessment instrument—home care in home health services for older people. J Korean Acad Nurs. 2009;39:366–75.

  27. Stolee P. Measuring outcomes of multidimensional interventions. In: Brocklehurst's textbook of geriatric medicine and gerontology. Philadelphia: Elsevier; 2010. p. 1–17.

  28. Guthrie DM, Pitman R, Fletcher PC, Hirdes JP, Stolee P, Poss JW, et al. Data sharing between home care professionals: a feasibility study using the RAI home care instrument. BMC Geriatr. 2014. https://doi.org/10.1186/1471-2318-14-81.

  29. Doran DM, Hirdes JP, Blais R, Baker GR, Poss JW, Li X, et al. Adverse events among Ontario home care clients associated with emergency room visit or hospitalization: a retrospective cohort study. BMC Health Serv Res. 2013. https://doi.org/10.1186/1472-6963-13-227.

  30. Ontario Health Coalition. 2011. Still waiting: An assessment of Ontario's home care system after two decades of restructuring. http://www.ontariohealthcoalition.ca/wp-content/uploads/Full-Report-April-4-2011.pdf. Accessed 24 July 2018.

  31. Parsons JG, Parsons MJ. The effect of a designated tool on person-centred goal identification and service planning among older people receiving homecare in New Zealand. Health Soc Care Community. 2012. https://doi.org/10.1111/j.1365-2524.2012.01081.x.

  32. Toscan J, Mairs K, Hinton S, Stolee P. Integrated transitional care: patient, informal caregiver and health care provider perspectives on care transitions for older persons with hip fracture. Int J Integr Care. 2012;12:e13.

  33. Health Quality Ontario. 2012. Quality Monitor: 2012 Report on Ontario's health care system. http://www.hqontario.ca/portals/0/documents/pr/qmonitor-full-report-2012-en.pdf. Accessed 20 July 2018.

  34. Ontario Home Care Association. 2013. Home care in Ontario-facts and figures. http://www.homecareontario.ca/home-care-services/facts-figures/publiclyfundedhomecare. Accessed 24 July 2018.

  35. Wodchis WP, Dixon A, Anderson GM, Goodwin N. Integrating care for older people with complex needs: key insights and lessons from a seven-country cross-case analysis. Int J Integr Care. 2012. https://doi.org/10.5334/ijic.2249.

  36. Ellenbecker CH, Samia L, Cushman MJ, Alster K. Patient safety and quality in home health care. In: Hughes RG, editor. Patient safety and quality: an evidence based handbook for nurses. Rockville: Agency for Healthcare Research and Quality; 2008.

  37. Dillman D. Mail and internet surveys: the tailored design method. 2nd ed. New York: Wiley; 2000.

  38. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. New York: Oxford University Press; 2008.

  39. The Ontario Seniors' Secretariat. 2013. Independence, Activity, and Good Health: Ontario's Action Plan for Seniors. http://www.homecareontario.ca/docs/default-source/publications-mo/ontarioseniorsactionplan-en.pdf?sfvrsn=10. Accessed 24 July 2018.

  40. Lofland J, Snow DA, Anderson L, Lofland LH. Analyzing social settings: a guide to qualitative observation and analysis. 4th ed. Belmont: Wadsworth Publishing; 2006.

41. Pope C, Ziebland S, Mays N. Qualitative research in health care. Analysing qualitative data. BMJ. 2000;320(7227):114–6.

  42. QSR International Pty Ltd. 2012. Nvivo qualitative data analysis software version 10: QSR international.

  43. Kraemer HC, Thiemann S. How many subjects? Statistical power analysis in research. Thousand Oaks: Sage Publications Inc; 1987.

  44. Fleiss JL. Design and analysis of clinical experiments. New York: Willey; 1986.

  45. Fleiss JL, Cohen J. The equivalence of weighted kappa and the Intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973. https://doi.org/10.1177/001316447303300309.

  46. Cohen J. Statistical power analysis for the behavioural sciences. New York: Routledge Academic; 1988.

  47. IBM. SPSS for windows Version 16.0. Chicago: SPSS, Inc; 2007.

  48. Kline P. The handbook of psychological testing. 2nd ed. New York: Routledge; 2000.

  49. Elsawy B, Higgins KE. The geriatric assessment. Am Fam Physician. 2011;83(1):48–56.

  50. Cobb EL, Duthie EH, Murphy JB. Geriatrics Review Syllabus: A Core Curriculum in Geriatric Medicine. 5th ed. Malden: Blackwell Publishing for the American Geriatrics Society; 2002.

  51. Stauder R, Moser K, Holzner B, Sperner-Unterweger B, Kemmler G. Six independent domains are defined by geriatric assessment in elderly cancer patients. Crit Rev Oncol Hematol. 2010. https://doi.org/10.1016/j.critrevonc.2009.04.010.

  52. Fleming J, Scamehorn K. FEEBLE FALLERS ARE FRAIL (acronym for domains in comprehensive geriatric assessment). POGOe Portal Geriatr Online Educ. 2006.

  53. Gallo J, Paveza G, Reichel W, Fulmer T, Kazer MW, Shea JM, Guttman C. Handbook of geriatric assessment. 4th ed. Sudbury: Jones and Bartlett; 2006.

  54. Van Ness PH, Towle VR, Juthani-Mehta M. Testing measurement reliability in older populations: methods for informed discrimination in instrument selection and application. J Aging Health. 2008. https://doi.org/10.1177/0898264307310448.

  55. Nunnally JC. Psychometric theory. 2nd ed. New York: McGraw-Hill; 1978.

  56. Milisen K, Coussement J, Flamaing J, Vlaeyen E, Schwendimann R, Dejaeger E, et al. Fall prediction according to nurses' clinical judgment: differences between medical, surgical, and geriatric wards. J Am Geriatr Soc. 2012. https://doi.org/10.1111/j.1532-5415.2012.03957.x.

  57. Turkoski B, Pierce LL, Schreck S, Salter J, Radziewicz R, Guhde J, Brady R. Clinical nursing judgment related to reducing the incidence of falls by elderly patients. Rehabil Nurs. 1997;22(3):124–30.

  58. Vassallo M, Poynter L, Sharma JC, Kwan J, Allen SC. Fall risk-assessment tools compared with clinical judgment: an evaluation in a rehabilitation ward. Age Ageing. 2008. https://doi.org/10.1093/ageing/afn062.

  59. Smets IH, Kempen GI, Janssen-Heijnen ML, Deckx L, Buntinx FJ, van den Akker M. Four screening instruments for frailty in older patients with and without cancer: a diagnostic study. BMC Geriatr. 2014. https://doi.org/10.1186/1471-2318-14-26.

  60. Kirkhus L, Šaltytė Benth J, Rostoft S, Grønberg BH, Hjermstad MJ, Selbæk G, et al. Geriatric assessment is superior to oncologists’ clinical judgement in identifying frailty. Br J Cancer. 2017. https://doi.org/10.1038/bjc.2017.202.

  61. Pinholt EM, Kroenke K, Hanley JF, Kussman MJ, Twyman PL, Carpenter JL. Functional assessment of the elderly: a comparison of standard instruments with clinical judgment. Arch Intern Med. 1987. https://doi.org/10.1001/archinte.1987.00370030088017.

  62. Worrall G, Chaulk PC, Briffett E. Functional assessment verses clinical Judgement in predicting outcomes of community-based continuing care: a prospective study. Can Fam Physician. 1996;42:2360–7.

  63. Butterworth JE, Campbell JL. Older patients and their GPs: shared decision making in enhancing trust. Br J Gen Prac. 2014. https://doi.org/10.3399/bjgp14X682297.

  64. Kuluski K, Nelson M, Tracy CS, Alloway CA, Shorrock C, Shearkhani S, Upshur REG. Experience of care as a critical component of health system performance measurement: recommendations for moving forward. HealthcarePapers. 2017;17(2):8–20.

  65. Lally J, Tullo E. Engaging older people in decisions about their healthcare: the case for shared decision making. Rev Clin Gerontol. 2012. https://doi.org/10.1017/S0959259811000281.

  66. Registered Nurses Association of Ontario. Person- and Family- Centred Care Clinical Best Practice Guidelines. 2015. https://rnao.ca/sites/rnao-a/files/FINAL_Web_Version_0.pdf. Accessed 24 July 2018.

  67. Schulman-Green DJ, Naik AD, Bradley EH, McCorkle R, Bogardus ST. Goal setting as a shared decision making strategy among clinicians and their older patients. Patient Educ Couns. 2006. https://doi.org/10.1016/j.pec.2005.09.010.

  68. Curran V. Interprofessional education for collaborative patient-Centred practice: research synthesis paper. 2004. https://research.library.mun.ca/154/1/Interprofessional_Education_for_collaborative_patient_centred_practice.pdf. Accessed 26 July 2018.

  69. Dahlke S, Steil K, Freund-Heritage R, Colborne M, Labonte S, Wagg A. Older people and their families' perceptions about their experiences with interprofessional teams. Nurs Open. 2018. https://doi.org/10.1002/nop2.123.

  70. Lindberg B, Nilsson C, Zotterman D, Soderberg S, Skar L. Using information and communication Technology in Home Care for communication between patients, family members, and healthcare professionals: a systematic review. Int J Telemed Appl. 2013. https://doi.org/10.1155/2013/461829.

71. Canadian Institute for Health Information. Occupational Therapists in Canada, 2011--National and Jurisdictional Highlights. 2011. www.cihi.ca/CIHIxportal/pdf/internet/OT2011_HIGHLIGHTS_PROFILES_EN. Accessed 24 July 2018.

  72. Health Human Resources. Physiotherapists in Canada. 2009. www.cptbc.org/pdf/CIHReport.PTinCanada.2009.pdf. Accessed 24 July 2018.

  73. Ontario Home Care Association. Home care nursing in Ontario. 2011. http://www.homecareontario.ca/docs/default-source/HHR/hc-nsg-in-ontario---mar-2011---final-rev.pdf?sfvrsn=8. Accessed 24 July 2018.

  74. Elliott J, Gordon A, Tong CE, Stolee P. “We’ve got the home care data, what do we do with it”?: understanding data use in decision-making and quality improvement. BMC Health Serv Res. 2020;20:251. https://doi.org/10.1186/s12913-020-5018-9.

  75. Dillman DA. Response rate and measurement differences in mixed-mode surveys using mail, telephone, interactive voice response (IVR) and the internet. Soc Sci Res. 2009;38:1–18.

Acknowledgements

The authors would like to acknowledge Dr. Kerry Byrne, Dr. Samantha Meyer, and the members of the GHS Research Group and SE Research Centre for their valuable contributions to this research study.

Funding

SE Health provided funding for this study. Dr. Giosa was also supported by: a) A Canadian Institutes for Health Research Canada Graduate Scholarship: Doctoral Award; and b) A President’s Graduate Scholarship from the University of Waterloo. The funding bodies had no role in study design, data collection, data analysis, interpretation or writing of the manuscript.

Author information

Contributions

JG designed the study, collected and analyzed the data and led the writing of the manuscript. PS designed the study, assisted with analyzing and interpreting the data and with the writing of the manuscript. PH assisted with analyzing and interpreting the data and with writing of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Justine L. Giosa.

Ethics declarations

Ethics approval and consent to participate

Ethics clearance for this research study was granted by the University of Waterloo Office of Research Ethics (ORE #19586).

All key informants provided written consent to participate in the interviews, which were audio-recorded and transcribed verbatim. All survey participants were provided with necessary study information at the beginning of the survey and consent was implied from their voluntary submission of the survey.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Draft G-CAP Survey for Clinical Expert Key Informant Consultation.

Additional file 2.

Pilot Version of the G-CAP Survey

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Giosa, J.L., Stolee, P. & Holyoke, P. Development and testing of the Geriatric Care Assessment Practices (G-CAP) survey. BMC Geriatr 21, 220 (2021). https://doi.org/10.1186/s12877-021-02073-5
