
Assessing your mood online: acceptability and use of Moodscope

Published online by Cambridge University Press:  13 November 2012

G. Drake*
Affiliation:
Institute of Psychiatry, King's College London, UK
E. Csipke
Affiliation:
Institute of Psychiatry, King's College London, UK
T. Wykes
Affiliation:
Institute of Psychiatry, King's College London, UK
*
*Address for correspondence: Mr G. Drake, Institute of Psychiatry, King's College London, De Crespigny Park, London SE5 8AF, UK. (Email: [email protected])

Abstract

Background

Moodscope is an entirely service-user-developed online mood-tracking and feedback tool with built-in social support, designed to stabilize and improve mood. Many free internet tools are available with no assessment of acceptability, validity or usefulness. This study provides an exemplar for future assessments.

Method

A mixed-methods approach was used. Participants with mild to moderate low mood used the tool for 3 months. Correlations between weekly assessments using the Patient Health Questionnaire (PHQ-9) and the Generalized Anxiety Disorder Assessment (GAD-7) with daily Moodscope scores were examined to provide validity data. After 3 months, focus groups and questionnaires assessed use and usability of the tool.

Results

Moodscope scores were correlated significantly with scores on the PHQ-9 and the GAD-7 for all weeks, suggesting a valid measure of mood. Low rates of use, particularly toward the end of the trial, demonstrate potential problems relating to ongoing motivation. Questionnaire data indicated that the tool was easy to learn and use, but there were concerns about the mood adjectives, site layout and the buddy system. Participants in the focus groups found the tool acceptable overall, but felt clarification of the role and target group was required.

Conclusions

With appropriate adjustments, Moodscope could be a useful tool for clinicians as a way of initially identifying patterns and influences on mood in individuals experiencing low mood. For those who benefit from ongoing mood tracking and the social support provided by the buddy system, Moodscope could be an ongoing adjunct to therapy.

Type
Original Articles
Creative Commons
The online version of this article is published within an Open Access environment subject to the conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) licence. The written permission of Cambridge University Press must be obtained for commercial re-use.
Copyright
Copyright © Cambridge University Press 2012

Introduction

Guided self-help, psycho-educational interventions and computer-assisted psychotherapy (CAT) place more responsibility with the client in the treatment and management of their symptoms. CAT, in particular, is an increasingly popular method of addressing the growing demand for psychological help. An early review of CAT found high levels of satisfaction among patients and generally good outcomes for mild to moderate psychological distress (Wright & Wright, 1997). The authors highlight the advantages of using computers in therapy, including increasing confidentiality, reducing cost, promoting psycho-education and increasing access; many internet-based programmes can be used at home, 24 hours a day.

A meta-analysis of 12 randomized controlled trials of internet-based cognitive therapy found small to medium effect sizes for the treatment of symptoms of depression and anxiety (Speck et al. 2007), and a review of computerized cognitive-behavioural therapy (CBT) packages for depression provided tentative support for their efficacy (Foroushani et al. 2011).

Alongside licensed programmes in treatment guidance, there has been a rapid growth in online self-help websites. Entering ‘self-help depression’ into Google brings up more than 77 million results, among which are countless online therapeutic tools. Although such resources can widen access to psychological services, the vast majority are never evaluated in terms of efficacy, feasibility and acceptability among users, or whether they cause inadvertent harm.

The focus of this paper is on free online self-help treatments, and in particular a new and entirely service-user-developed therapeutic tool: Moodscope. It has three main features. First, it allows users to rate their mood using an adapted version of the validated Positive and Negative Affect Schedule (PANAS; Watson et al. 1988) and provides graphs of mood over time. Second, it allows personal annotations that help the user to identify potential influences on mood. Third, social support is provided by allowing users to nominate ‘buddies’, who automatically receive notifications of the mood rating, to which they can respond with feedback and support.

Mood rating is widely used in psychological treatments for depression and anxiety disorders, usually as an ongoing progress measure. It is less commonly a main mechanism of change in therapy. ‘Mood labelling’, defined as ‘the ability to identify and characterize one's mood states’, predicts positive affect and high self-esteem (Swinkels & Giuliano, 1995, p. 934). However, the same authors found that ‘mood monitoring’, or ‘a tendency to scrutinise and focus on one's moods’, predicted negative affect and greater rumination on negative thoughts (Swinkels & Giuliano, 1995, p. 934). It is therefore vital to examine Moodscope users' responses toward tracking and annotating influences on their mood, as it may produce unwanted effects.

The benefits of high levels of subjective social support for depressive symptoms are well evidenced (George et al. 1989; Moak & Agrwal, 2010). However, it is not known whether these benefits will transfer to Moodscope's buddy system, which originates online with an automated notification, or whether users would feel comfortable sharing mood scores with members of their support network.

The psycho-educational, mood-tracking and support features of Moodscope are embedded in many other freely available online programs (e.g. moodtracker.com, moodpanda.com, mood247.com). The aim of this pilot study is to explore the feasibility and acceptability of Moodscope among people experiencing mild to moderate low mood. However, it also provides an exemplar of how a mixed methods approach is useful in this area. We incorporate ongoing quantitative assessments of mood from the site and two validated mood measures, alongside focus group and questionnaire feedback on acceptability upon completion of the study, and the outcomes are likely to be relevant for other similar computerized treatment developments.

Method

Design

The design comprised a 3-month longitudinal follow-up study of the use and acceptability of a novel online mood-monitoring tool, Moodscope, using a mixed methods approach. Use and acceptability were assessed through questionnaires and focus groups at the end of 3 months. Mood was assessed weekly using the Patient Health Questionnaire (PHQ-9; Spitzer et al. 1999) and the Generalized Anxiety Disorder Assessment (GAD-7; Spitzer et al. 2006). Changes in scores over time and correlations between these measures and daily Moodscope scores were examined to provide data on the validity of the tool.

Participants and recruitment

Participants were recruited from waiting rooms in six general practitioner (GP) surgeries across three South London boroughs. The inclusion criteria were: (1) a PHQ-9 score in the range requiring psychological treatment (5 to 14, mild to moderate depression) and (2) regular internet access.

Following informed consent, individuals were screened (Ethics Ref. No. 10/H0803/42).

Online therapeutic tool: Moodscope

Completing the mood assessment involves visiting a website (www.moodscope.com), signing in, and rating mood by selecting the extent to which each of 20 interactive mood-adjective playing cards describes current mood. The rating is based on the extent that users currently experience the 20 (positive and negative) emotions (e.g. ‘proud’, ‘nervous’, ‘determined’). Cards appear one at a time and the user selects either the ‘flip’ or ‘rotate’ button until their selection (‘very slightly or not at all’, ‘a little’, ‘quite a bit’, ‘extremely’) is highlighted. They then click their selection and the next card appears. Test completion takes approximately 5 min and generates a daily score to enable mood tracking over time. Historical scores can be retrieved as a graph to which the user can add annotations.
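A minimal sketch of how such a card-based assessment might be turned into a single daily percentage score is shown below. The scoring rules used by moodscope.com are not described in this paper, so the mapping of responses to values and the rescaling to a 0–100% figure are assumptions for illustration only.

```python
# Hypothetical scoring sketch: NOT Moodscope's published algorithm.
# Assumes 20 PANAS-style adjectives (10 positive, 10 negative), each rated on
# the four-point response scale described above, combined into a 0-100% score.

RESPONSE_VALUES = {
    "very slightly or not at all": 0,
    "a little": 1,
    "quite a bit": 2,
    "extremely": 3,
}

def daily_score(ratings: dict, positive_items: set) -> float:
    """ratings maps each of the 20 adjectives to one of the four responses."""
    raw = 0
    for adjective, response in ratings.items():
        value = RESPONSE_VALUES[response]
        raw += value if adjective in positive_items else -value
    # Raw total ranges from -30 (all negative items 'extremely') to +30;
    # rescale to a 0-100% figure for plotting on the mood graph.
    return round((raw + 30) / 60 * 100, 1)
```

Under these assumptions, a user who rated every positive adjective ‘quite a bit’ and every negative adjective ‘a little’ would obtain a raw total of 10 and a daily score of (10 + 30)/60 × 100 = 66.7%.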

Users receive two forms of feedback on mood: automated feedback from the website, which comprises a short, two-paragraph supportive summation based on comparisons between the most recent and previous scores (e.g. ‘things aren't as good as they looked the last time you took the test and got 35%’); and optional feedback from a ‘buddy’ (or several buddies) whom the user can nominate from their support network. Users are free to choose whether or not they add ‘buddies’. After each completion of the mood assessment, an email containing the mood score is sent automatically to the buddy. Buddies have access to the participant's graph but not the annotations.
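The two feedback channels can be summarised in the short sketch below. The message wording and function names are invented for illustration (apart from the comparison phrase quoted above); only the overall behaviour, a comparison with the previous score and a score-only notification to buddies, is taken from the description in this paper.

```python
from typing import Optional

# Illustrative sketch (assumed, not Moodscope's actual implementation) of the
# two automated outputs: the comparison-based supportive message and the
# buddy notification, which carries the score but not the user's annotations.

def comparison_message(current: float, previous: Optional[float]) -> str:
    if previous is None:
        return f"You scored {current}% on your first test."
    if current >= previous:
        return (f"Things are looking better than the last time you took "
                f"the test and got {previous}%.")
    return (f"Things aren't as good as they looked the last time you took "
            f"the test and got {previous}%.")

def buddy_notification(user_name: str, score: float) -> str:
    # Buddies can view the mood graph but not the annotations.
    return f"{user_name} has just recorded a Moodscope score of {score}%."
```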

Measures

PHQ-9

The PHQ-9 is a nine-item self-report screening and diagnostic measure corresponding to the diagnostic criteria for DSM-IV major depressive disorder. Each item is scored from ‘not at all’ (0) to ‘nearly every day’ (3): a total of 15–27 indicates severe depression and 5–14 indicates mild to moderate depression (Spitzer et al. 1999).

GAD-7

The GAD-7 is a seven-question screening measure for generalized anxiety disorder. Items are rated on a 0–3 scale: a total of 15–21 indicates severe anxiety and 5–14 indicates mild to moderate anxiety (Spitzer et al. 2006).
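A brief sketch of the severity bands quoted above, assuming that each measure is a simple sum of its 0–3 item ratings; the band labels and the fallback label for scores below 5 are written here for illustration only.

```python
# Severity bands as reported above, assuming simple item sums
# (PHQ-9: nine items, GAD-7: seven items, each scored 0-3).

def phq9_band(total: int) -> str:
    if 15 <= total <= 27:
        return "severe depression"
    if 5 <= total <= 14:
        return "mild to moderate depression"  # study inclusion range
    return "below the mild range"  # label assumed, not from the paper

def gad7_band(total: int) -> str:
    if 15 <= total <= 21:
        return "severe anxiety"
    if 5 <= total <= 14:
        return "mild to moderate anxiety"
    return "below the mild range"  # label assumed, not from the paper
```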

Moodscope Usability and Usefulness Questionnaire

This questionnaire includes 17 structured questions to assess the helpfulness of the tool, ease of use, and the type and frequency of feedback from buddies. Open-ended questions about using the tool and suggestions for improvement were also included at the end of the questionnaire.

Qualitative data

We ran two focus groups, of four and five participants respectively. Preliminary analyses of the first group fed into the second group discussion, as suggested by Corbin & Strauss (2008). The topic guide focused on the concept of mood monitoring and the various components of the Moodscope website rather than on individuals' experiences of low mood. Focus groups were transcribed verbatim and analysed using thematic analysis.

Procedure

Participants signed up online to Moodscope for a period of 3 months. They were asked to complete the tool daily if possible but, failing this, as regularly as they could. A member of the research team was added as a silent buddy so that participant data were accessible independently of the Moodscope website.

The study researcher contacted participants by telephone or email to complete the weekly GAD-7 and PHQ-9 assessments. At the end of the first week of using Moodscope, participants who wished to use the buddy system were asked to contact their nominated buddies, who were notified of the request by email. After 3 months, participants completed the Moodscope Usability and Usefulness Questionnaire, and those who consented attended the focus groups.

Focus group analysis

A thematic analysis of the two focus groups was carried out using NVivo9 software (QSR International Pty Ltd). Emerging themes were identified and their relative importance determined by frequency and ‘uniqueness’. An independent rater coded one focus group. Differences in coding schemes were discussed until consensus was reached. To provide triangulation, the agreed first-level coding scheme was then applied to the open-ended questionnaire responses of those participants who did not attend the focus groups; these responses aligned with the focus group data and no new codes were identified. Finally, the coded data were explored to identify overarching themes.

While moving inductively through the levels of analysis, a process of ‘constant comparison’ was used; this involved continually checking emerging themes against transcripts and first-level codes to refine them (Corbin & Strauss, 2008).

Results

Of the 20 participants who began the study, four dropped out within the first 2 weeks. The remaining 16 completed the entire 12 weeks. Table 1 shows the demographic characteristics of all participants.

Table 1. Descriptive comparison of participants who completed the trial with those who dropped out: sociodemographic variables

s.d., Standard deviation.

Validity and changes in mood over time

The data in Table 2 illustrating changes in mood over time are interpreted descriptively; they give no indication of efficacy. Pearson's coefficient was used to assess the correlations between Moodscope, PHQ-9 and GAD-7 scores.
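A sketch of this analysis is given below, assuming that daily Moodscope scores are averaged within each study week and then correlated, across participants, with that week's PHQ-9 and GAD-7 totals; the column names and data layout are illustrative rather than taken from the study dataset.

```python
import pandas as pd
from scipy.stats import pearsonr

def weekly_correlations(daily: pd.DataFrame, weekly: pd.DataFrame) -> pd.DataFrame:
    """daily: columns [participant, week, moodscope];
    weekly: columns [participant, week, phq9, gad7]."""
    # Average the daily Moodscope scores within each participant-week.
    mean_ms = daily.groupby(["participant", "week"], as_index=False)["moodscope"].mean()
    merged = weekly.merge(mean_ms, on=["participant", "week"])
    rows = []
    for week, grp in merged.groupby("week"):
        # Higher Moodscope scores indicate better mood, so negative r values
        # are expected against the two symptom measures.
        r_phq, p_phq = pearsonr(grp["moodscope"], grp["phq9"])
        r_gad, p_gad = pearsonr(grp["moodscope"], grp["gad7"])
        rows.append({"week": week, "r_phq9": r_phq, "p_phq9": p_phq,
                     "r_gad7": r_gad, "p_gad7": p_gad})
    return pd.DataFrame(rows)
```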

Table 2. Mean weekly GAD-7, PHQ-9 and Ms scores, and mean visits to Ms per week

GAD-7, Seven-question Generalized Anxiety Disorder Assessment; PHQ-9, nine-item Patient Health Questionnaire; Ms, Moodscope; s.d., standard deviation.

a A higher score signifies better mood.

Changes in mood over time

The most positive change in mood occurred within the first 2 weeks of Moodscope use. Importantly, no decrease in mood was found.

Validity as a daily measure of mood

As shown in Table 3, average weekly Moodscope scores were significantly correlated with PHQ-9 scores for all weeks and with GAD-7 scores for all except week 4, suggesting a valid measure of mood.

Table 3. Pearson's r scores and significance levels for weekly PHQ-9–Moodscope and GAD-7–Moodscope correlations

* p < 0.01, ** p < 0.05.

However, the focus groups elucidated several issues pertinent to the perceived accuracy of Moodscope scores. Although instructed to describe how they feel at the moment of completing the assessment, participants reported that a more reflective style of completion was less susceptible to reactive fluctuations in mood over the course of the day, and more representative of overall daily mood.

If the questions were phrased as over the course of today how excited, inspired have you felt, my answers would have been very different from how do you feel right now. I think that idea of looking back in the evening; I think the evening's better than the day.

Furthermore, not all PANAS mood adjectives were seen as valid descriptors of daily mood. Several participants struggled to define and differentiate similar words such as ‘jittery’ and ‘nervous’; one participant, for whom English was a second language, said the ‘subtle differences don't mean as much’. Some felt that overall the assessment did not encapsulate daily mood.

There were some days when I was feeling something different and there was something missing … I wanted another word in there … there was something that wasn't captured.

Acceptability of Moodscope as an online therapeutic tool

When asked how helpful Moodscope was, the average response was 5.6/10 (see Table 4). Attitudes towards helpfulness varied substantially between different aspects of the tool.

Table 4. Moodscope Usability and Usefulness Questionnaire responses

Ms, Moodscope; s.d., standard deviation.

a Answers are out of 10; a higher answer indicates a more positive response.

Tracking mood over time

Participants responded favourably when asked about the helpfulness of tracking mood over time (7.2/10) but there was a range in responses. Two participants with less favourable opinions explained their reasons:

I personally did not find a numerical quantification of my mood helpful. I feel that daily introspection is a retrograde step.

I would be in a pretty decent mood, and I would complete the questionnaire accordingly, and then the results would indicate that I was actually in a much worse mood than I thought I was. This would make me doubt my ability to assess my inner state which would put me in a worse mood.

Unexpectedly low mood scores were also discussed in the focus groups. A few participants expressed similar opinions but, for the majority, the unexpected mood scores were beneficial, leading to raised awareness and acting as a precursor to taking action:

Some days it was very low, and I felt going into it that it would be the same as yesterday … so it did force me to analyse: well what happened then? What is going on?

However, tracking mood did not generally provide new insights but only affirmed or validated what participants already knew. Most appreciated the affirmation:

It just affirms what you've been feeling in the day, so you can be like ‘hey I'm having a bad day, that's alright’, rather than worrying about it.

Seeing and making annotations

The helpfulness of seeing and making annotations on the graph varied (6.4/10). Participants for whom the assessment provided no new insight stated that they were already aware of the influences on their mood, and participants who felt negatively about tracking mood also felt negatively about annotating influences. It was participants for whom the process provided welcome new insights into mood who found annotating the graph useful.

It really helps to see a picture; it really brings it to life. But I was quite surprised by the changes. I was a bit concerned about myself but then you click up an explanation and go ‘oh yeah’.

The automated feedback

Automated feedback was not seen as helpful (4.9/10), although responses ranged from 1 to 10. There were two different types of automated feedback: a percentage score after each test, which is plotted on the graph, and supportive comments generated by the site. The more factual feedback was preferred.

You can tell it's automated [feedback] and all it's done is the maths for you … ‘Congratulations or commiserations, you've gone up or down’, it's very formulaic in the way it was presented … I know, some people, it made them feel worse … it was a little bit patronizing.

You almost wanted to get to the end, to the graph point, because that was a bit more logical and serious.

Usability of the website

Participants reported Moodscope as easy to learn to use (9.3/10), easy to complete (9.0/10) and easy to get online (7.7/10) and reasonably easy to remember to complete (7.0/10). The focus groups elaborated on issues relating to access and privacy.

I think personally for me it was whether you had the space – you need the internet to do it … you don't really want to do it at work because you don't want your work colleagues to see it and you don't really want to open up your lap top when you got home from work at 8 o'clock at night.

Frequency and regularity of completion

Although completing the mood assessment was not too time-consuming (6.4/10), daily completion was seen by 50% of participants as ‘too frequent’, and by 21% as ‘slightly too frequent’. Table 2 highlights the dwindling use of Moodscope over the course of 12 weeks, with a mean of only 2.6 visits a week or lower after week 8. Five participants did not complete any Moodscope assessments in week 12.

The focus groups highlighted the importance of motivation for continued use, with one participant describing completing Moodscope as ‘like going to the gym’. Maintaining motivation was a particular issue for participants who were not discovering anything new, and during periods in which their mood was stable.

I'm hoping that ultimately it will flag up when I'm on the way back down again; the only thing is that these things don't happen very often and am I going to be up for doing Moodscope everyday when everything seems to be running smooth?

However, the importance of regular completion for identifying changes in mood was recognized in the focus groups.

I think only if you do it daily will you get a pattern, because when you just do it every few days, you'll do it when you're in a good mood, and when you're feeling crap you won't get around to it or, if you're having a really good head-space, you'll think ‘I'll do it later’.

Non-use and attrition

Three of the four participants who dropped out felt more negatively about Moodscope: one participant's primary reason for dropping out was the nature of the feedback received; she found the automated support patronizing and the objective representation of her mood detrimental. Two participants did not find the tool at all helpful and expected it to be ‘more like therapy’. The fourth simply reported not having time to complete the study.

Social support: the buddy system

Nine participants chose to have a buddy and seven of these completed the buddy-related items. The mean helpfulness rating for buddy feedback was 6.6/10. Eighty-five per cent of participants felt that the feedback they received was frequent enough and the right length; 28% felt feedback came ‘slightly too quickly’ and the remainder felt its timing was ‘adequate’. The most common type of feedback was email exchange (‘often’ for one participant, ‘sometimes’ for three and ‘rarely’ for two participants), with telephone calls and face-to-face meetings being less common (Table 4).

Types of feedback and support

Genuine concern and subtle encouragement in feedback were most appreciated. There were mixed attitudes towards the use of humour; participants with lower mood were less receptive. Several participants simply appreciated knowing someone was receiving their score.

A big part of depression is feeling alone and even if they're just getting an email that says so and so is having a crap day, then at least someone else there knows.

Burden and guilt

Some participants felt uncomfortable receiving feedback and several participants highlighted the negative consequences of the new dynamic created by the perceived burden placed on their buddy:

I did it once and received a text from him saying ‘are you okay?’ and I thought ‘what are you going on about? Leave me alone.’ So it didn't make me feel very comfortable.

Concerns about the potential impact on buddies were discussed by all participants, leading some to feel guilty, particularly when their mood score seemed to be inaccurately low:

I think it's a huge burden … I think they would've felt that they have to support me if they've seen all these thirties; they'd think I was suicidal and I wasn't anywhere near it. I just think it wasn't fair.

This perceived burden led one participant to moderate her responses on the mood assessment but most participants said it did not affect their score.

I didn't change the data. I didn't really; the Buddy was my housemate as well. I suppose I did like play it down a little bit.

Just less than half the sample completed the study without a buddy. These participants gave the negative aspects outlined above as reasons for their decision.

The potential clinical role of Moodscope

In the focus groups, participants discussed their uncertainty about what the tool was for: ‘Is it a monitoring tool or a counselling tool?’ Most participants agreed that it could be useful for the initial identification of patterns and influences on mood before or with another therapy.

I think you could use it as a support. You know other cases where you have diaries and you've got to track things, could you use that instead? But it's a support, not to solve.

One participant thought Moodscope would be beneficial for people who may feel uncomfortable going to the doctor.

I think for a lot of guys I know who wouldn't want to say ‘I'm struggling doctor’ I think that would be a great use to them. I think if people know about it in the privacy of their own homes they can try and help themselves … to identify what's bothering them and maybe give them the confidence to then go a step further and see the doctors. That's where I see its use as being.

Finally, despite questioning the benefits of the tool for themselves, participants were favourable about the acceptability and feasibility of Moodscope for new users.

I think if you're experiencing mental health [problems] for the first time then yeah that's a really good thing, but it sounds like we're all quite far down the help-line. I know I'm four or five years down the line of being in the system and having help; it gets to the point where you're pretty aware [of your mood].

Discussion

Average weekly Moodscope scores were significantly correlated with PHQ-9 scores for depression for all weeks and with GAD-7 scores for anxiety for all except 1 week, suggesting a valid measure of mood. There is strong internal consistency and external evidence of convergent and discriminant validity in the original PANAS development study (Watson et al. 1988). However, the adjectives were validated over 20 years ago in a private US college student and employee population, and this may be why some of our participants thought several of the adjectives inappropriate descriptors of daily mood. The current validity and acceptability of the PANAS adjectives may need to be revisited.

Furthermore, participants preferred to reflect on overall daily mood rather than momentary reactive fluctuations; in the original PANAS study these were found to be equivalent (Watson et al. 1988). However, Myin-Germeys et al. (2009) argue that the dynamic patterns of reactivity to the environment, captured with ‘in the moment’ assessments, are essential features of affective disorders. They might also enable participants to begin to identify contextual factors that affect them, so this is an issue for future research.

Participants acknowledged the importance of regular completion and recognized the potential for a distorted portrayal of mood with less regular completion, which has also been found by Ben-Zeev et al. (2012). However, rates of use dwindled substantially over the 12 weeks and participants felt that daily use of the tool was too frequent. Alongside fitting Moodscope into a routine, four factors seemed to influence use and acceptability:

(1) The automated supportive feedback was perceived as patronizing and unhelpful. A more straightforward layout would have been preferred.

(2) Whether the tool provided new insights into mood affected motivation to continue with regular completion; participants who reported already being aware of influences on and fluctuations in their mood reported the most difficulties with motivation.

(3) Participants' attitudes towards the concept of mood tracking affected acceptability of the tool; several participants reported not wanting to dwell on low mood. Such preferences should be recognized, particularly in light of the negative consequences of overly scrutinizing one's mood found by Swinkels & Giuliano (1995).

(4) Attitudes toward the buddy system varied: some participants were concerned about the intrusiveness of the system and the new dynamic it created with their friends. Half the sample completed the tool without a buddy and all participants discussed the burden placed on their buddy. The one-way automated nature of the email-based social support system may preclude some of the benefits of subjective social support for depression found in previous research (George et al. 1989; Moak & Agrwal, 2010). Feedback from buddies was generally infrequent. Guidelines on what is expected from buddies may improve this aspect of the tool.

Each of these factors relates to the uncertainty expressed by all participants about the role of, and target group for, Moodscope. Participants felt it would be useful for people experiencing low mood for the first time, which was not the case for the majority of our sample. As with all psychological therapies, the concept of mood tracking will not be acceptable to all. Several individuals simply did not find it helpful; this should be expected in the wider population.

Refining the target group and role of the tool may go some way to improving rates of use. However, previous research found similar problems with continued motivation to use open-access online tools, with rates of completion as low as 1% for a 12-week online program for panic (Farvolden et al. 2005), and for all modules of an open-access site for depression (Christensen et al. 2004).

Eysenbach (2005) highlights the importance of differentiating between attrition from a trial and non-usage of a site, stating that, with internet-based studies, usability and technological factors can affect website adherence. Four participants in our study dropped out of the trial early on but a potentially greater concern in terms of wider acceptability was non-use of the site among participants who completed the study.

Participants in previous web-based intervention studies have reported time constraints, computer access and the burden of the program (too demanding, patronizing or fast paced) as reasons for non-use (Waller & Gilbody, 2009). Similarly, in our study, although participants found the tool easy to learn to use and complete, the demands of daily use, some mood adjectives and the automated feedback affected acceptability of the site. The latter two factors can be improved, but motivation for regular use among participants who do not feel particularly negative about the site may be more difficult to address. Providing a definitive time-frame of use, for example a month, may improve motivation, but it may be that daily use is unrealistically demanding for most users.

People who took part in this study had a demographic profile that was not representative of the local community, suggesting that the appeal of this sort of program may be limited. A review of computerized CBT trials by Waller & Gilbody (2009) also found substantial differences between the educational demographics of individuals who took part (26–50% educated to university level) and the general population (14% with higher education nationally), which suggests that this skewed set of characteristics is representative of those who are likely to take part in internet therapies generally.

Limitations

Issues relating to acceptability and non-use of the site are limitations of the tool itself and were exactly the characteristics that we intended to measure. Drop-out from the study, although providing useful data on acceptability, left only 16 pairs for analysis. The results are therefore preliminary and need further examination, particularly in relation to whether the program would have a wide appeal across the educational and socio-economic spectrum. The researcher was a silent buddy. This may have affected the way participants completed the Moodscope assessment but was not mentioned as a possible influence in our final focus groups.

Implications

With appropriate adjustments, Moodscope could be a useful online tool for clinicians to use, for example during waiting times for psychological treatment or a second visit with a GP. A month of daily use of Moodscope could provide a rich impression of influences on mood and patterns in mood fluctuations that might assist in diagnosis and inform treatment choices. These patient-driven continuous diagnostic assessments have been suggested as important to furthering our understanding of mental ill health and/or as ‘unlocking the “black box” of DSM and ICD diagnoses’ (de Groot, 2010).

For individuals who benefit from the buddy system and from mood tracking generally, the tool could be an ongoing adjunct to therapy. This echoes therapists' views that self-help can enhance rather than replace therapist-led therapy (Audin et al. 2003).

Finally, issues relating to the validity of the mood measure, the acceptability of site features and motivation for regular use are likely to be relevant for many similar online tools. We provide here an exemplar of how the use of a mixed methods approach, in particular the inclusion of qualitative data from participants who drop out along with data from those who complete, can give a thorough impression of a tool's acceptability, and pave the way for larger efficacy trials.

Acknowledgements

This study was funded by the National Institute for Health Research (NIHR) Biomedical Research Centre for Mental Health at the South London and Maudsley National Health Service (NHS) Foundation Trust and the Institute of Psychiatry, King's College London.

Declaration of Interest

None.

References

Audin, K, Bekker, HL, Barkham, M, Foster, J (2003). Self-help in primary care mental health: a survey of counsellors' and psychotherapists' views and current practice. Primary Care Mental Health 1, 89–100.
Ben-Zeev, D, McHugo, GJ, Xie, H, Dobbins, K, Young, MA (2012). Comparing retrospective reports to real-time/real-place mobile assessments in individuals with schizophrenia and a non-clinical comparison group. Schizophrenia Bulletin 38, 396–404.
Christensen, H, Griffiths, KM, Korten, AE, Brittliffe, K, Groves, C (2004). A comparison of changes in anxiety and depression symptoms of spontaneous users and trial participants of a cognitive behavior therapy website. Journal of Medical Internet Research 6, e46.
Corbin, J, Strauss, A (2008). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Sage: Thousand Oaks, CA.
de Groot, P (2010). Patients can diagnose too: how continuous self-assessment aids diagnosis of, and recovery from, depression. Journal of Mental Health 19, 352–362.
Eysenbach, G (2005). The law of attrition. Journal of Medical Internet Research 7, e11.
Farvolden, P, Denissof, E, Selby, P, Bagby, RM, Rudy, L (2005). Usage and longitudinal effectiveness of a web-based self-help cognitive behavioural therapy program for panic disorder. Journal of Medical Internet Research 7, 7.
Foroushani, P, Schneider, J, Assareh, N (2011). Meta-review of the effectiveness of computerised CBT in treating depression. BMC Psychiatry 11, 131.
George, LK, Blazer, DG, Hughes, DC, Fowler, N (1989). Social support and the outcome of major depression. British Journal of Psychiatry 154, 478–485.
Moak, Z, Agrwal, A (2010). The association between perceived interpersonal social support and physical and mental health: results from the National Epidemiological Survey on Alcohol and Related Conditions. Journal of Public Health 32, 191–201.
Myin-Germeys, I, Oorschot, M, Collip, D, Lataster, J, Delespaul, P, van Os, J (2009). Experience sampling research in psychopathology: opening the black box of daily life. Psychological Medicine 39, 1533–1547.
Speck, V, Cuijpers, P, Nyklicek, I, Riper, H, Keyzer, J, Pop, V (2007). Internet-based cognitive behaviour therapy for symptoms of depression and anxiety: a meta-analysis. Psychological Medicine 37, 319–328.
Spitzer, RL, Kroenke, K, Williams, JBW (1999). Validation and utility of a self-report version of PRIME-MD: the PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. Journal of the American Medical Association 282, 1737–1744.
Spitzer, RL, Kroenke, K, Williams, JBW, Lowe, B (2006). A brief measure for assessing generalized anxiety disorder: the GAD-7. Archives of Internal Medicine 166, 1092–1097.
Swinkels, A, Giuliano, TA (1995). The measurement and conceptualization of mood awareness: attention directed towards one's mood states. Personality and Social Psychology Bulletin 21, 934–949.
Waller, R, Gilbody, S (2009). Barriers to the uptake of computerized cognitive behavioural therapy: a systematic review of the quantitative and qualitative evidence. Psychological Medicine 39, 705–712.
Watson, D, Clark, LA, Tellegen, A (1988). Development and validation of brief measures of positive and negative affect: the PANAS scales. Journal of Personality and Social Psychology 54, 1063–1070.
Wright, JH, Wright, AS (1997). Computer-assisted psychotherapy. Journal of Psychotherapy Practice and Research 6, 315–329.