Abstract

The gap between need and access to mental health care is widest in low-resource settings. Health systems in these contexts devote few resources to expanding mental health care, and it is missing from the agenda of most global health donors. This is partially explained by the paucity of data regarding the nature and extent of the mental health burden in these settings; accurate and comparable measurement is therefore essential to advocating for, developing, and implementing appropriate policies and services. Inaccurate estimation of mental illness prevalence, and misunderstandings regarding its etiologies and expressions, are associated with unnecessary costs to health systems and to people living with mental illness. This paper presents a selective literature review of the challenges associated with mental health measurement in these settings globally, illustrates them with several case studies, and suggests three strategies for researchers to improve their assessments: utilizing qualitative data, conducting cognitive interviews, and training research teams with a focus on inter-rater reliability. These three strategies, added to the complement of tools used by mental health researchers in low-resource settings, will enable more researchers to conduct culturally valid work, improve the quality of data available, and assist in narrowing the treatment gap.

Introduction

Worldwide, only a small minority of those in need receive even basic treatment for mental illness. The gap between need and access to mental health care is widest in low-resource settings.1 Health systems devote few resources to the expansion of mental health care in these areas, and mental health is missing from the agenda of most global health donors.2 This situation may be partially explained by the paucity of data regarding the burden of mental illness in these settings, which results from both low quality data and the substantial costs of collecting, sharing, and utilizing existing data.3,4 Optimizing mental health indicators for health information management systems, and managing these systems, necessitates an amount of time and effort that is rarely available to stakeholders.4 Furthermore, much of the available data has been collected with an emphasis on comparability across contexts over accuracy in context, limiting the applicability of the measures to the populations surveyed. The accurate and comparable measurement of the burden of mental illness is essential to advocating for, developing, and implementing adequate and appropriate policies and services for mental health. Both over- and under-estimation of the prevalence of mental illness, as well as misunderstandings regarding its etiologies and expressions, are associated with unnecessary costs to the health system.5,6 Despite the vital importance of valid, comparable psychometric assessment in low-resource settings, only a fraction of mental health work attempts to adapt and utilize culturally-valid measures in these contexts.

Within high-resource settings, it is common to have a gold standard method of psychometric assessment that is valid and reliable, but even while using validated instruments, researchers need to help participants become familiar with items, terminologies, and response choices. Thus, it is not entirely unexpected that studies have demonstrated that simple translation and back-translation of assessments often miss key aspects of the targeted constructs when the assessments are used in contexts that differ from those in which they were developed.7 In addition, changing the mode of administration from self-report to interview in order to gather mental health information from participants with limited literacy can introduce a host of new problems, including poor inter-interviewer reliability and difficulties with the Likert scale response choices.

This study surveys the challenges with mental health measurement in limited resource contexts globally and aims to provide succinct strategies for groups conducting this work going forward. The challenges and solutions discussed here are a selective review of studies and approaches that successfully validated assessments across cultures (please see Ali et al.8 for a more comprehensive review of validated measures in global work).

We identified relevant studies by performing a search on PubMed using the terms ‘mental health’ AND ‘low income’ OR ‘cross-cultural’ AND ‘assessment.’ We included studies conducting mental health assessment in cross-cultural or low-resource settings, or promoting methods to adapt validated diagnostic and screening tools across contexts. We further snowballed our sampling by investigating the reference lists of relevant studies, and by contacting our colleagues in the field for relevant case studies. These case studies are provided to more fully illustrate common challenges that arise when working in low-resource settings. The goal in conducting this review and providing strategies for improving assessments is to encourage and enable a greater variety of researchers to utilize culturally appropriate psychometric assessments when working globally.

Challenges in global mental health assessment

Mental health problems typically do not have a ‘blood test’ to objectively identify presence of a disorder. Thus, to assess mental health reliably and with high validity in a global limited resource context, one needs a culturally appropriate and valid measure, and a trained technician to administer the measure to ensure reliability.

To help ensure validity of a mental health measure for use in a low-resource setting, one must have a thorough understanding of the shape of psychosocial distress in the population of interest. In high-resource contexts, researchers typically construct a mental health diagnostic or symptom severity measure derived from diagnostic categories relevant in high-resource countries (e.g., Diagnostic and Statistical Manual of Mental Disorders [DSM]9, or International Classification of Diseases [ICD]10). One performs classical and modern psychometric and statistical tests with the measure after several administrations: factor analyses, Cronbach's alpha, test-retest reliability, item response theory-based analyses, or sensitivity and specificity analyses.11–13 The results of these tests inform researchers as to the measure's utility for their question of interest.
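As an illustration of one of these checks, internal consistency is commonly summarized with Cronbach's alpha, which compares the sum of the item variances with the variance of the total score. A minimal sketch in Python, using hypothetical Likert-type responses (not data from any study cited here):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a set of scale items.

    item_scores: one list per item, aligned by respondent
    (hypothetical data, for illustration only).
    """
    k = len(item_scores)
    item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Toy example: 4 items x 5 respondents, responses coded 0-3
items = [
    [0, 1, 2, 3, 3],
    [1, 1, 2, 3, 2],
    [0, 2, 2, 3, 3],
    [1, 1, 3, 2, 3],
]
alpha = cronbach_alpha(items)
```

Values near 1 indicate that the items vary together, as expected of a coherent construct; low or negative values flag items that may not belong in the adapted scale.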

Ideally, these steps would be carried out similarly in all contexts, but there exist a variety of challenges for researchers working internationally to overcome. For one, few global contexts have formal or widespread definitions of ‘diagnostic categories’ upon which to base surveys or measures.14 Without this reference, researchers need to elucidate how experiences are locally defined and understood qualitatively. This work is difficult because diagnostic categories act both as a clinical measure and as a shared construct for individuals to define their experiences in daily life.14 For example, an individual may understand their lack of motivation or anhedonia as a ‘depressive’ psychological symptom whereas others may interpret their symptoms as physiological concerns.15 Without a larger, shared social construct, researchers and study participants have to conduct more groundwork to find consensus on how to interpret their psychological and physiological experience (see Abubakar et al.16 for a recent example of this). Further, the lack of a locally valid psychometric assessment is often concurrent with a lack of formally or rigorously trained ‘mental health experts’ in low- and middle-income countries.17 This makes it more difficult to select informants and local experts to develop a local ‘gold standard’ against which to compare previously constructed assessments.18

Despite these challenges, and others discussed below, a number of authors have successfully created novel assessments and adapted others from high-resource contexts. These authors offer a number of guides through their publications, which form the backbone of the ‘ideal’ strategy provided below. For example, Kohrt, Kaiser, and colleagues validated depression and PTSD assessments for use with children in Nepal, as well as depression and anxiety assessments for work in Haiti.19,20 Kohrt et al.19 suggest six questions and steps around which to structure the development of novel or adapted assessments, and similar strategies are outlined and performed by Bolton, Bass, Patel and others.16,21–25 However, while these guides are indispensable, they necessitate a wealth of qualitative data built from years, and sometimes decades, of experience with the target population. This type of work requires not only specialized training and expertise but also a high level of consistent funding and investigation. Relatively little has been written on how to approximate these ideal methods under more common demands on time and funding.

The importance of developing and validating measures and training research teams for varying contexts is not unique to mental health; other aspects of health face similar or parallel challenges. Few physical disorders have objectively reliable measures for diagnosis; many health care practitioners rely on patient reports, detailed histories and technician-administered tests before making diagnoses. For example, spirometry is often used to make a diagnosis of chronic obstructive pulmonary disease, but this test is also highly dependent on a skilled technician to administer it. Without rigorous training, the results of spirometry would be, and sometimes still are, highly unreliable across technicians. Additionally, many measures of health in high-resource contexts considered to be objective and universal, such as ‘healthy’ body mass index or hormonal profiles, do not directly apply to many low-resource populations. Among Bangladeshi women, for example, average progesterone levels would be considered pathological in the US,26 yet these women have much higher fertility rates,27 despite the fact that becoming pregnant is largely contingent on having sufficient progesterone levels. Given the difficulties of ensuring accurate measurement and pathological assessment across cultures in readily quantifiable, physiological characteristics, alongside the incredible diversity of cultural and psychosocial expression globally, it is not surprising that psychometric assessment across cultural contexts is also rife with difficulty.

Case examples

As mentioned above, the ideal approach to psychometric validation requires extensive study and significant funding, and it is therefore difficult to implement in low-resource settings. There is also an urgency to develop mental health services in these settings, and thus funding often prioritizes service provision over measurement development. It is therefore understandable that mental health measures from high-resource countries are often used in low-resource settings without significant modification. However, as several researchers have detailed, total reliance on high-resource setting conceptualizations of mental illnesses and non-attendance to local idioms of distress are severely problematic.19,28–31 Users of high-resource country assessments in low-resource contexts must take into account cultural taboos that may prevent participants from willingly sharing information. For example, women from conservative cultures, such as a population recently studied in South India, may be hesitant to share information about sexual abuse or may have different definitions of what constitutes sexual abuse.32–34 Further, people in some low-resource settings may be reluctant to share information about mental health as certain terms are heavily stigmatized (e.g., depression).30 Additionally, psychometric validation approaches often require ‘gold standard’ instruments and cutoff scores. In low-resource settings and in low-income countries, these gold standards rarely exist.18 We outline a few real-world case examples to illustrate how these challenges can play out through common situations in global health work (see Box 1).

Box 1.
Real-world case examples illustrating common situations in global health work

Case 1.

Responding to need for mental health related research in a country in sub-Saharan Africa, a team of researchers with established ties in local antenatal care settings had nurses administer the Patient Health Questionnaire 9 item (PHQ-9)35 to pregnant women enrolled in Prevention of Mother to Child Transmission of HIV services (PMTCT). The tool had been validated in another sub-Saharan country (Uganda), and thus, the team assumed that the meaning of items would apply across the two countries. The team had no extra funding or resources to validate the PHQ-9 for the country context; however, the questionnaire was reviewed with study nurses and language subsequently modified. The PHQ-9 was included in a battery of questionnaires to be administered as an interview by one to two nurses at each of nine study clinics, who had training in administration of mental health instruments. The team received responses from 406 pregnant women living with HIV. Ninety-seven percent (n=394) of participants had a total score of 4 or below (negligible depressive symptoms), 2.5% (n=10) had a total score of 5 to 9 (indicating mild symptoms), and 0.3% (n=1) had a total score of 10 or above (moderate symptoms). In other sub-Saharan African countries, perinatal depression has an overall prevalence of approximately 13–17%, and is assumed to be higher among HIV-infected pregnant women.36,37 The investigators also noted that one of the nine sites acted as an outlier, with an average total score of 7 compared to 0 to 2.2 at the other sites. In this case, it was difficult to tell if the site with a higher average score had an external reason for a high rate of depression (e.g., in a particularly impoverished area) or if the interviewer method of administration (e.g., rapport; flexibility with terminologies used; depression-related stigmatizing attitudes held by the interviewer that may have affected administration) affected scores at the site.

Case 2.

A student goes to the field to conduct thesis-related work and begins to interview participants in an HIV clinic in another sub-Saharan African country. She administers the PHQ-9 in the local language (which she speaks fluently) as an interview, because she anticipates that most of her participants have limited literacy; the format still requires Likert-type responses from participants. Participants have difficulty indicating which number on the response scale represents their experience, and instead they tell stories of their experiences with depressive symptoms. The student calls her mentor in the United States for guidance.

Case 3.

In a country in South Asia, a team of researchers implemented a mental health intervention in three non-psychiatric, hospital-based clinics. In order to recruit participants (patients attending clinics with moderate depressive symptoms) for qualitative formative work, interviewers needed to use the PHQ-9 to determine eligibility. After several weeks of screening, interviewers had trouble finding participants with a score of 10 or above on the scale: only five percent screened positive, in a context where previous studies using the same scale with the same population had found that 20% had moderate depressive symptoms. After undergoing training that focused on role playing, rapport building, and flexibility of terminologies used, the interviewers subsequently found that approximately 15% of those screened in their clinics were eligible.
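The screening described in these cases follows standard PHQ-9 scoring: nine items rated 0–3 are summed to a 0–27 total, which is then banded by conventional cutoffs (the ‘score of 10 or above’ rule in Cases 1 and 3 corresponds to the moderate threshold). A minimal sketch, with function names our own:

```python
def phq9_total(responses):
    """Sum the nine PHQ-9 item responses, each coded 0-3."""
    assert len(responses) == 9 and all(r in (0, 1, 2, 3) for r in responses)
    return sum(responses)

def severity_band(total):
    """Conventional PHQ-9 severity bands for a 0-27 total score."""
    if total <= 4:
        return "minimal"
    if total <= 9:
        return "mild"
    if total <= 14:
        return "moderate"
    if total <= 19:
        return "moderately severe"
    return "severe"

def eligible(total, cutoff=10):
    """Screening rule used in Case 3: a total score of 10 or above."""
    return total >= cutoff
```

As the cases show, the arithmetic is the easy part; whether a given total is a valid signal depends on how the items were understood and administered.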

Participants often prefer to tell of their experience, rather than distill their experience to a number on a Likert scale. In fact, the need for an experience to be distilled into a number is generated from the research world, a world that is entirely unfamiliar to many living in limited resource contexts. While answering questions on a Likert scale is common in the high-resource world at the doctor's office, for telemarketers, and elsewhere, the Likert scale is an odd, foreign concept for many in limited resource contexts. Some orientation to the task is needed for the participant, and training is essential for the interviewer.

Illiteracy or low levels of reading proficiency are common problems for researchers working in low-resource settings. This limits both common understandings of symptoms and Likert scales, as well as the ability to gather self-report information. Some successful strategies have included using symbols instead of numbers for Likert scales (e.g., spectrum from light to dark; depictions of increasing levels of physical burden to indicate distress).38 Another strategy that allows researchers to collect self-report data is the use of electronic interview technologies like Audio-Computer Assisted Self-Interview (ACASI), in which participants take an assessment on a computer; items and response choices are read aloud by the computer.39 A positive aspect of this strategy is that it allows a uniform protocol and self-report to be administered to a large number of participants. However, a potential negative is that it requires the manipulation of an electronic interface, an experience that is not widespread in low-resource communities. Furthermore, a participant has no way to clarify questions with an interviewer in order to increase their understanding of the items. Another negative is that the computer cannot adjust terminologies based on local dialects. People often use slightly different terms for depression, even within the same city or town.40

Lastly, many researchers have encountered issues with participants in low-resource contexts having difficulty with answering items that reflect abstract, time-bound responses. For example, participants in some contexts prefer items without the 1 month or 2 week time frame whereas others have preferred items that refer to an event, such as a festival time or holiday.41 In cases such as these, many have turned to more concrete measures such as the Beck Depression Inventory (BDI) to assess depressive symptoms, but use of this scale may introduce additional problems, as the BDI was developed as a clinical rather than a research tool. Also, response choices change for each item of the BDI, which may make items difficult to follow and cause cognitive strain on the participants.16

Strategies for the way forward

Below, we outline three solutions that allow researchers to pre-emptively address problems with their surveys and, more importantly, uncover issues that would otherwise have gone undetected (e.g., problems with construct validity or test-retest reliability). We first outline an approach focused on expanding existing, validated assessments to include appropriate, local psychosocial constructs as well as improving local implementation. We then describe an approach that incorporates cognitive interviews to identify participants’ perspectives on measurement items and ensure agreement on items between researchers and participants. Lastly, we outline an efficient method for improving the reliability of measures through the careful training of research teams. Ultimately, researchers should make the choice between strategies based on their time and funding as well as their knowledge of the target population through previous and ongoing work. All of the potential solutions outlined here can be, and have been, applied flexibly and in tandem with each other in previous studies.21,42

Design, Implementation, Monitoring and Evaluation (DIME)

The DIME process was developed by the Applied Mental Health Research Group at Johns Hopkins Bloomberg School of Public Health for long-term investigations of mental health in low-resource settings.21,42 This strategy focuses upon translating and building an assessment from a qualitative understanding of the low-resource target population (see Kohrt et al.19 for a similar process at work). First, the researchers clarify the purpose of their assessment: some assessments are better suited to measuring improvement after an intervention, while others are better suited to estimating prevalence. Second, they perform a qualitative study to capture local terms, manifestations, understandings, and stigmas of mental health, from which to select a psychosocial construct of interest. Such studies typically include in-depth interviews, focus groups, and key informant interviews.21 The third step involves using this knowledge to develop a comprehensive psychometric assessment for the construct of interest. The DIME framework recommends adapting an existing assessment by keeping the original items and adding items derived from the qualitative study.43 Including the original items may help comparability between populations regarding mental health burdens, but may also decrease the appropriateness of the assessment to the local context (e.g., assessing auditory hallucinations in some indigenous contexts can yield false positives, as hearing voices of ancestors is considered normal and acceptable).40 The fourth step includes pilot testing, re-testing, and improving the assessment where necessary. These steps should allow researchers to investigate key aspects of the assessment: its internal consistency, test-retest reliability, and validity.
This generalized solution is the process most likely to produce an accurate and useful assessment, though it often requires access to more funding, skilled personnel, and time than is typical in teams conducting psychometric assessments in low-resource settings. Therefore, although open-access, in-depth guides to the DIME framework are available online, it can be a highly resource-intensive strategy to employ.
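For the reliability checks in the final step, test-retest reliability is often summarized as the correlation between total scores from two administrations of the same assessment. A minimal sketch of the Pearson correlation on hypothetical totals (illustrative only, not the DIME group's own code):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between paired lists of total scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ss_x = sum((a - mx) ** 2 for a in x)
    ss_y = sum((b - my) ** 2 for b in y)
    return cov / sqrt(ss_x * ss_y)

# Hypothetical total scores from the same six respondents,
# assessed twice, two weeks apart
time1 = [4, 9, 12, 3, 15, 7]
time2 = [5, 8, 11, 4, 14, 9]
r = pearson_r(time1, time2)
```

A high correlation suggests the adapted measure produces stable scores over a short interval; a low one suggests items are being interpreted inconsistently and need further revision.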

Cognitive interviewing with subsequent item adjustment and testing

Cognitive interviewing is a flexible set of methods for evaluating an existing assessment, or adapting an assessment to specific contexts.44 The goal of a cognitive interview is to qualitatively discover how subjects come to their answer on a survey question and what difficulties they faced in coming to that answer. Both are necessary for ensuring that the goals of a survey are being met. Two popular methods of cognitive interviewing are the think-aloud technique and the verbal probing technique.44 In the think-aloud technique, interviewers ask participants to speak all of their thoughts as they respond to a survey question. This technique requires little training for the interviewer, but may place additional burdens on the participant. Thinking through one's thought processes out loud does not always come naturally, and thus, may require training of the participants. In the verbal probing technique, the interviewer administers the items of the questionnaire, and then afterwards, explores the participants’ understanding of the questions, item by item. This technique requires slightly more planning when probes are scripted, but it typically takes less time to conduct than the think-aloud technique because it minimizes the chances for participants to elaborate on thinking tangential to the survey. Cognitive interviewing can also be used to evaluate response categories. After the interviewing process, researchers can use the feedback provided to adjust questions’ wording, so that they more clearly address the researchers’ questions of interest. After editing or dropping of items, the revised scale can be tested for its psychometric properties with the population of interest.

Training of survey teams

Training of interviewers is a necessary first step to beginning assessments in global limited resource contexts. First and foremost, investigators need to gauge interviewers’ comfort with assessing for mental health issues. This is particularly important when asking non-psychiatric specialists to conduct assessments. If interviewers hold stigmatizing attitudes toward people with mental illness, or are concerned that they themselves may be stigmatized if they administer the instrument, then this must be navigated at the start. A particularly useful strategy is to explain the importance of the assessment (e.g., ‘We need to determine levels of depression in order to help people better engage in diabetes care because the research shows that depression interferes with adherence to medications’). Second, interviewers need to know that they can be flexible with the terminologies they use in administering the scale in order to increase participants’ understanding of items. Reading straight from the printed measure not only decreases connectedness between the interviewer and participant, making it uncomfortable for the participant to ask clarifying questions, but also increases the chances that the participant will not understand the items. Third, trainers should instruct interviewers in using visual prompts to help orient the participants to the response scales. Sometimes, simply showing a print-out of numerical response choices in large print and asking the participant to point to their response can help to coach the participant into choosing a response. Fourth, role-playing and pilot testing of the interviews allow trainers to supervise and offer feedback on the process. Role-playing in a larger group can be one way of ensuring that all in the group administer the instrument in the same way. Inter-rater reliability should be checked, and training exercises repeated until good inter-rater reliability is achieved.
This type of training can be done in a few hours, and may require only one day when interviewers from different regions come together in one setting for a group training.

However, as a project scales up, training in this way may become more difficult or impractical (e.g., increased cost and time) to conduct as a single large group while maintaining good inter-rater reliability. Therefore, it becomes even more important that trainings are planned throughout the course of a project's growth. One option is to conduct a series of smaller trainings led by the same trainers, with comparisons of ratings between members of different training groups. Another, less costly solution is to use video recordings of a series of patient interviews, so that interviewers can refresh their own training while training others and throughout the normal course of their data collection. Interviewers can rate the recorded patients, discuss their ratings with other interviewers, and continue to rate recorded patients until desirable reliability is established both between and within interviewers. As training interviewers is a necessary first step for a project, ensuring inter-rater reliability throughout a scale-up is a particularly pressing concern.
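For two interviewers assigning categorical ratings (e.g., screen-positive vs. screen-negative) to the same recorded interviews, agreement beyond chance is commonly summarized with Cohen's kappa. A minimal sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies
    expected = sum(c1[cat] * c2[cat] for cat in set(rater1) | set(rater2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two interviewers rating the same 10 recorded interviews
# (1 = screen-positive, 0 = screen-negative; hypothetical data)
r1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
r2 = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
kappa = cohens_kappa(r1, r2)
```

By convention, values above roughly 0.6–0.8 are taken as good agreement; a training group might repeat rating exercises until kappa between every pair of interviewers clears the chosen threshold.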

Conclusions

Mental health will continue to be missing from the agenda of major global health donors and of health systems in low-resource settings as long as quality data regarding the burden of mental illness in these settings are lacking. Valid, comparable psychometric assessment is difficult under the best of circumstances, and it is harder still in low-resource settings, where ‘gold standard’ tools must be adapted and applied across cultures, often with limited training or support. Yet the steps necessary to accomplish this have not previously been outlined in the literature.

Researchers face numerous challenges conducting psychometric assessment in low-resource settings. Cross-cultural adaptation requires a thorough understanding of local idioms of distress, often necessitating prolonged qualitative study. Mental illness is often highly stigmatized or taboo. Widely used questionnaire tools with Likert scales are unfamiliar or inappropriate in many contexts, and participants may struggle to bound their experiences within strict time-frames.

Despite these challenges, the global mental health literature does not contain many guides for researchers with the wide range of resources typically found among teams conducting mental health work in global contexts. There are detailed, thorough examples of extended qualitative work producing high-quality assessments,18–21 but few publications provide suggestions for conducting this work under common constraints experienced in the field.45,46

This paper outlines several accessible, flexible methods for overcoming common challenges in the field. The DIME process builds on existing tools by adding psychosocial constructs from the target population, emphasizing comparability of the measure across countries and contexts. This process typically necessitates high levels of researcher training, funding, and time availability. Cognitive interviewing typically requires fewer resources and is still useful in generating tools that accurately reflect local constructs. Specifically, it focuses on the subtleties and specificities of participant experience. Lastly, we outlined efficient means of ensuring adequate interviewer training. Inter-rater reliability is essential to all projects—both for accurate assessment and effective interventional support. These three methods, added to the complement of tools used by mental health researchers working in low-resource settings, will significantly improve the quality of data available and assist in narrowing the treatment gap. We hope that this paper works to both encourage and enable more mental health researchers to attend to the importance of cross-cultural validation of mental health assessment.

Authors’ contributions: All authors contributed equally to the design, interpretation, and writing of the manuscript. RLT and CGK wrote the literature review; DR provided the overall frame, wrote the case studies and other sections of the manuscript. All authors read and agreed on the final version. DR is the guarantor of the manuscript.

Acknowledgments: The authors thank our fellow faculty and student investigators who provided many of the case examples for this paper and who shared their experiences and data for use in our manuscript.

Funding: None.

Competing interests: None declared.

Ethical approval: Not required.

References

1. Prince M, Patel V, Saxena S et al. No health without mental health. Lancet 2007;370:859–77.

2. Lancet Global Mental Health Group. Scale up services for mental disorders: a call for action. Lancet 2007;370:1241–52.

3. Saxena S, Paraje G, Sharan P et al. The 10/90 divide in mental health research: trends over a 10-year period. Br J Psychiatry 2006;188:81–2.

4. Jordans MJ, Chisholm D, Semrau M et al. Indicators for routine monitoring of effective mental healthcare coverage in low- and middle-income settings: a Delphi study. Health Policy Plann 2016;1:7.

5. Chisholm D. Dollars, DALYs and Decisions: Economic Aspects of the Mental Health System. Geneva: World Health Organization; 2006.

6. Kagee A, Tsai AC, Lund C, Tomlinson M. Screening for common mental disorders in low resource settings: reasons for caution and a way forward. Int Health 2013;5:11–4.

7. Canino G, Alegría M. Psychiatric diagnosis – is it universal or relative to culture? J Child Psychol Psychiatry 2008;49:237–50.

8. Ali GC, Ryan G, De Silva MJ. Validated screening tools for common mental disorders in low and middle income countries: a systematic review. PLoS One 2016;11:e0156939.

9. APA. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC: American Psychiatric Association; 2013.

10. WHO. International Statistical Classification of Diseases and Related Health Problems (10th revision). Geneva: World Health Organization; 2010.

11. Nunnally J, Bernstein I. Psychometric Theory. 3rd ed. New York: McGraw Hill; 1994.

12. Hambleton R, Swaminathan H, Rogers H. Fundamentals of Item Response Theory. Newbury Park, CA: Sage Publications; 1991.

13. Twist K, Stahl D, Amiel SA et al. Comparison of depressive symptoms in type 2 diabetes using a two-stage survey design. Psychosom Med 2013;75:791–7.

14. Kleinman AM. Anthropology and psychiatry. The role of culture in cross-cultural research on illness. Br J Psychiatry 1987;151:447–54.

15. Kleinman AM. Depression, somatization and the ‘new cross-cultural psychiatry’. Soc Sci Med 1977;11:3–9.

16. Abubakar A, Kalu RB, Katana K et al. Adaptation and latent structure of the Swahili version of Beck Depression Inventory-II in a low literacy population in the context of HIV. PLoS One 2016;11:e0151030.

17. WHO. Mental Health Atlas-2005. Rev ed. Geneva: World Health Organization; 2005.

18. Bolton P. Cross-cultural validity and reliability testing of a standard psychiatric assessment instrument without a gold standard. J Nerv Ment Dis 2001;189:238–42.

19. Kohrt BA, Jordans MJ, Tol WA et al. Validation of cross-cultural child mental health and psychosocial research instruments: adapting the Depression Self-Rating Scale and Child PTSD Symptom Scale in Nepal. BMC Psychiatry 2011;11:127.

20. Kaiser BN, Kohrt BA, Keys HM et al. Strategies for assessing mental health in Haiti: local instrument development and transcultural translation. Transcult Psychiatry 2013;50:532–58.

21. Bass JK, Ryder RW, Lammers MC et al. Post-partum depression in Kinshasa, Democratic Republic of Congo: validation of a concept using a mixed-methods cross-cultural approach. Trop Med Int Health 2008;13:1534–42.

22. Weaver LJ, Kaiser BN. Developing and testing locally derived mental health scales: examples from North India and Haiti. Field Methods 2015;27:115–30.

23. Epstein J, Santo RM, Guillemin F. A review of guidelines for cross-cultural adaptation of questionnaires could not bring out a consensus. J Clin Epidemiol 2015;68:435–41.

24. Haroz EE, Bass JK, Lee C et al. Adaptation and testing of psychosocial assessment instruments for cross-cultural use: an example from the Thailand Burma border. BMC Psychol 2014;2:31.

25. Patel V, Simunyu E, Gwanzura F et al. The Shona Symptom Questionnaire: the development of an indigenous measure of common mental disorders in Harare. Acta Psychiatr Scand 1997;95:469–75.

26. Núñez-de la Mora A, Chatterton RT, Choudhury OA et al. Childhood conditions influence adult progesterone levels. PLoS Med 2007;4:e167.

27. Mazharul Islam M, Ataharul Islam M, Chakroborty N. Fertility transition in Bangladesh: understanding the role of the proximate determinants. J Biosoc Sci 2004;36:351–69.

28. Kleinman A. Rethinking Psychiatry. New York: Simon and Schuster; 2008.

29. Bass JK, Bolton PA, Murray LK. Do not forget culture when studying mental health. Lancet 2007;370:918–9.

30. Rao D, Young M, Raguram R. Culture, somatization, and psychological distress: symptom presentation in South Indian patients from a public psychiatric hospital. Psychopathology 2007;40:349–55.

31. Kohrt BA, Hruschka DJ. Nepali concepts of psychological trauma: the role of idioms of distress, ethnopsychology and ethnophysiology in alleviating suffering and preventing stigma. Cult Med Psychiatry 2010;34:322–52.

32. Rao D, Kumar S, Mohanraj R et al. The impact of domestic violence and depressive symptoms on preterm birth in South India. Soc Psychiatry Psychiatr Epidemiol 2016;51:225–32.

33. Kishwar M. Law against domestic violence and abuse: underused or abused? Manushi 2000;120:17–24.

34. Ellsberg M, Jansen HA, Heise L et al. Intimate partner violence and women's physical and mental health in the WHO multi-country study on women's health and domestic violence: an observational study. Lancet 2008;371:1165–72.

35. Kroenke K, Spitzer R, Williams J. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001;16:606–13.

36. Hanlon C, Medhin G, Alem A et al. Detecting perinatal common mental disorders in Ethiopia: validation of the self-reporting questionnaire and Edinburgh Postnatal Depression Scale. J Affect Disord 2008;108:251–62.

37. Stewart RC, Bunn J, Vokhiwa M et al. Common mental disorder and associated factors amongst women with young infants in rural Malawi. Soc Psychiatry Psychiatr Epidemiol 2010;45:551–9.

38. Bass JK, Annan J, McIvor Murray S et al. Controlled trial of psychotherapy for Congolese survivors of sexual violence. N Engl J Med 2013;368:2182–91.

39. Secor AM, Wahome E, Micheni M et al. Depression, substance abuse and stigma among men who have sex with men in coastal Kenya. AIDS 2015;29(Suppl 3):S251–9.

40. Thakker J, Ward T. Culture and classification: the cross-cultural application of the DSM-IV. Clin Psychol Rev 1998;18:501–29.

41. Shweder R, Bourne E. Does the concept of the person vary cross-culturally? In: Marsella A, White G, eds. Cultural Conceptions of Mental Health and Therapy. Dordrecht, Holland: D. Reidel Publishing Company; 1982.

42. Betancourt TS, Bass J, Borisova I et al. Assessing local instrument reliability and validity: a field-based example from northern Uganda. Soc Psychiatry Psychiatr Epidemiol 2009;44:685–92.

43. Andresen EM, Malmgren JA, Carter WB, Patrick DL. Screening for depression in well older adults: evaluation of a short form of the CES-D (Center for Epidemiologic Studies Depression Scale). Am J Prev Med 1994;10:77–84.

44. Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. London: Sage Publications; 2004.

45. Hanlon C, Luitel NP, Kathree T et al. Challenges and opportunities for implementing integrated mental health care: a district level situation analysis from five low- and middle-income countries. PLoS One 2014;9:e88437.

46. De Silva MJ, Lee L, Fuhr DC et al. Estimating the coverage of mental health programmes: a systematic review. Int J Epidemiol 2014;43:341–53.
