Missing something? Email us, and we’ll look at updating it.
Last updated: December 10, 2024
In December 2024, Mojo launched a new feature: an AI-powered chatbot (the 'bot') that serves as the primary mechanism for delivering existing content from our platform. It acts as a virtual therapist on matters relating to sexual wellbeing, dating, and relationships.
The intended purpose of this feature is to replace our Platform's existing content delivery system with a tool that interacts with users more dynamically. It replaces the previous Mojo Platform 'Home' tab, which suggested daily Mojo content to complete, but did so in a more linear, programme-based model with less opportunity for personalization.
Users will be able to discuss their issues with the bot, which will utilize their preferences, previous conversations, and activity history to respond appropriately and suggest exercises or content from the Platform to work through next.
We expect that, once fully rolled out, it will become the primary mechanism of interaction between users and the Platform, both when the user has a particular topic they’d like to discuss / work on and also when the user wants a recommendation for what to do next. However, users will still be able to access unlocked content and the community forum via other parts of the Platform, meaning that users do not have to engage with the bot if they prefer not to.
Medical Disclaimer – As with the rest of the product, Mojo’s aim is to provide useful resources to our users. Our Platform provides general wellbeing information only and is not designed or intended to provide you with medical advice, diagnosis, treatment or otherwise be used as a medical tool. Our Platform should not be used as a replacement for a qualified medical professional and you should seek medical advice from a qualified medical professional or specialist before taking, or refraining from, any action on the basis of, or in connection with, your use of our Platform.
This feature will process user data given to us throughout the user journey, including special category data such as health and sex life data, which is already collected as part of the existing Mojo Platform. However, the bot's outputs, memories, and conversation history are all carefully isolated on a per-user basis. The data subject will be able to see and remove memories relating to their account.
The processing of personal data is necessary for the provision of the Mojo service and the performance of our contract with you as a user. We also ask all users for consent to process special category data, as this is also required for our Platform’s services to work, including use of this specific chatbot feature. Users can still exercise any of their data rights, unchanged with this feature, as described in our Privacy Policy.
We have entered into Data Processing Agreements with primary subprocessors of user personal data, including OpenAI and LangChain.
The Mojo Platform makes it clear when the user is interacting with a bot vs a human.
As part of the app onboarding, the user is prompted to choose the therapist persona of the bot (e.g. avatar), which will remain consistent throughout the user’s experience. Future iterations of the Platform may signpost this even further, with the possibility of naming the bot to further distinguish it from Mojo team members. Humans, including members of the Mojo team, will never interact with users via the chatbot screen on the ‘Home’ tab.
On the other hand, all interactions on the community forums (‘Community’ tab) are with humans. Mojo customer support (via Intercom chat or email) uses a mix of automated ‘bot’ responses, especially for triaging, and human conversations - these are signposted using names (e.g. ‘Mojo from Mojo’ denoting an automated response).
Aside from the automated flagging from our safety system, users can click a "report" button, both on individual responses and on the conversation as a whole. This will open an Intercom support message, so we can review and follow up via our regular customer support processes. Users are also encouraged to provide general feedback via customer support channels (Intercom chat or email).
In the earlier stages of testing the bot, a panel of Mojo-employed therapists will review a high proportion of messages to track the quality of the outputs, for example in terms of effectiveness and safety. This will enable us to adjust the bot to improve its performance in these areas. The proportion of messages reviewed will decrease over time, but review will be maintained in some form so that we are continually improving.
The bot has been developed on top of existing large language models, rather than by training a new model on Mojo data. Currently, the bot uses GPT-4o from OpenAI as the primary model. The bot may therefore be subject to any biases inherent in this model that are not explicitly corrected in prompts. However, as no further Mojo data is used to train the model (aside from direct additions to prompts), no further biases should be introduced.
The base prompts used to generate the bot’s outputs have been developed by the Mojo team to customize the usage of this model for Mojo’s purposes, and are not generated by AI. Input data points are selected and appended to the prompts in their text format, without any transformation on the part of Mojo. Similarly, outputs are not transformed by Mojo. Any transformations that occur to the data to make it usable by GPT-4o occur on the OpenAI side.
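To make this concrete, prompt assembly can be sketched as follows. This is an illustrative example only: the prompt text, field names, and helper function are hypothetical and are not Mojo's actual prompts or schema.

```python
# Hypothetical sketch: selected user data points are appended to a
# team-written base prompt as plain text, with no transformation by Mojo.
BASE_PROMPT = "You are a supportive wellbeing coach."  # written by the team, not AI-generated

def build_prompt(base: str, data_points: dict[str, str]) -> str:
    # Data points are appended in their text form, untransformed.
    appended = "\n".join(f"{key}: {value}" for key, value in data_points.items())
    return f"{base}\n\n{appended}"

prompt = build_prompt(BASE_PROMPT, {"goal": "reduce performance anxiety"})
print(prompt)
```

Any further transformation needed to make the prompt usable by the model would then happen on the provider's side, as described above.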
As with many large language models, GPT-4o makes decisions by predicting the most likely next word or phrase based on patterns it has learned from vast amounts of text data. Its ‘logic’ comes from statistical probabilities, not human reasoning.
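A toy illustration of this idea (not OpenAI's actual implementation): a language model assigns a score to each candidate next token, converts those scores into probabilities, and favours the statistically most likely continuation. The candidate words and scores below are invented for the example.

```python
import math

def softmax(logits: list[float]) -> list[float]:
    # Convert raw model scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["well", "badly", "purple"]  # hypothetical next-token options
logits = [3.1, 1.2, -2.0]                 # hypothetical model scores

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # prints "well" — the statistically most likely continuation
```

The model has no understanding of why "well" fits; it simply reflects patterns in its training data.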
Importantly, the bot's outputs are limited to generating replies for users and suggesting content - the bot makes no automated decisions with significant or far-reaching consequences for users.
The ‘accuracy’ of the model for Mojo’s purposes is still being evaluated. However, GPT-4o’s accuracy has been extensively studied.
We have established a safety framework that identifies specific situations of risk and instructs the chatbot on how to handle them.
Some examples of such risks include:
The base prompts used to generate the bot’s outputs include instructions in relation to this safety framework. We also use additional evaluative models to understand whether the generated outputs are appropriate to be shown to the user - if these outputs deviate from our guidelines, they are blocked from sending.
To prevent the chatbot from exposing one user's data to another, we have set up our backend infrastructure to retrieve personal data only for the user making a query to the model, using distinct user identifiers to ensure that the correct user's information is being referenced.
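The scoping rule above can be illustrated with a minimal sketch. The store layout, identifiers, and function names are invented for the example and are not Mojo's actual schema; the point is only that every lookup is keyed by the requesting user's identifier.

```python
# Hypothetical per-user memory store, keyed by distinct user identifiers.
MEMORY_STORE: dict[str, list[str]] = {
    "user-123": ["prefers evening sessions"],
    "user-456": ["working through chapter 2"],
}

def load_context(requesting_user_id: str) -> list[str]:
    # Only the requester's own records are ever retrieved, so one user's
    # context can never be loaded into another user's conversation.
    return MEMORY_STORE.get(requesting_user_id, [])

print(load_context("user-123"))  # only user-123's memories
```

Because the identifier is supplied by the authenticated session rather than the model, the model never has the opportunity to request another user's records.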
We mitigate the risk of responses that might be harmful to the data subject by designing our model prompts to avoid harmful responses by default, and by taking a stepwise approach to monitoring with both automated and manual checks. We clearly disclaim that the chatbot may make mistakes and ask users to report any to the Mojo team via support channels.
As above, our Platform should not be used as a replacement for a qualified medical professional.
If you have any questions about our Privacy Policy or information practices, please feel free to contact us at our designated request address: support@mojo.so. Alternatively, please contact our Data Protection Officer:
Aphaia Ltd
Eagle House
163 City Road,
London
EC1V 1NR
Telephone: +44 20 3917 4158
Email address: dpo@aphaia.co.uk
We have also designated a data protection representative in the EU. If you need to contact them, you can use the contact details below:
Data Protection Representative Europe, S.L
Paseo de la Castellana, 194
28046 Madrid, Spain
Email address: contact@dprep.eu