You're about to chat with an exciting chatbot! Before you begin, please read these quick instructions.
In order to give the chatbot a chance to get to know you, please write at least SIX lines before ending your session.
During chat, you can rate responses using "upvote/downvote" buttons next to each line the chatbot sends. Please use them to let the bot know how it's doing!
During the chat, if you type a rare word or make a spelling mistake, the bot will see your word as "<unknown>".
The bot may also use "<unknown>" when referring to words outside its vocabulary.
These bots are trained on movie dialogs and Reddit data, and sometimes say inappropriate things. Chatter discretion is advised.
When you are done chatting, please rate your chat experience by pressing the button in the top right of the screen.
Thank you for participating!
You are asked to participate in a research study conducted by
<Researcher names have been redacted for anonymity during the review process. For questions about the study, please email email@example.com>.
The results of this study will contribute to a Ph.D. dissertation. You were selected as a possible participant in this study because you have internet access. You should read the information below before deciding whether or not to participate.
The purpose of this study is to show that it is possible to improve a conversational AI by making it sensitive to the emotional tone of the responses it receives from the humans who chat with it. Specifically, as the model chats with a person, it will detect whether they are responding positively or negatively, and try to optimize its conversation so that it receives positive responses. We hope to show that making the AI learn from implicit human feedback not only makes it more intelligent, but also encourages it to take actions that are more likely to make people happy.
Your participation in this study is completely voluntary and you are free to choose whether to be in it or not. If you choose to be in this study, you may subsequently withdraw from it at any time without penalty or consequences of any kind. The investigator may withdraw you from this research if circumstances arise which warrant doing so.
If you volunteer to participate in this study, you will be taken to a website with a chat interface, in which you can interact with the AI. As you type your responses, we will use automatic sentiment analysis software to detect whether you are responding positively or negatively. The model will be trained to learn to say things that lead to more positive responses. After finishing your conversation, you may also be asked to rate the quality of the conversation on a numerical scale.
The text that you type into the website will be stored on a secure server and used to train the AI. Please do not type anything that you do not wish to be stored and used in training the model.
In addition to being able to interact with an AI in the same way that you would chat with your friends, you will help to train the AI model to learn to converse in a way that makes you happy. You will also help to establish the effectiveness of an important new research direction in the AI community: learning human preferences from implicit social feedback. This research direction could lead to a number of AI models being trained to be sensitive to social feedback, which could potentially make them smarter and more pleasant to interact with.
You will not be compensated for participating.
Any information that is obtained in connection with this study and that can be identified with you will remain confidential and will be disclosed only with your permission or as required by law. In addition, your information may be reviewed by authorized MIT representatives to ensure compliance with MIT policies and procedures. No information about your identity will be collected by the website; however, your conversations and ratings will be stored. These will be kept on a secure, password-protected server.
If you have any questions or concerns about the research, please feel free to contact Natasha Jaques (425-463-9162, firstname.lastname@example.org), Asma Ghandeharioun (email@example.com), Judy Hanwen Shen (firstname.lastname@example.org), Craig Ferguson (email@example.com), Noah Jones (firstname.lastname@example.org), Agata Lapedriza Garcia (email@example.com), or Rosalind Picard (617-253-0369, firstname.lastname@example.org).
If you feel you have suffered an injury, which may include emotional trauma, as a result of participating in this study, please contact the person in charge of the study as soon as possible.
In the event you suffer such an injury, M.I.T. may provide itself, or arrange for the provision of, emergency transport or medical treatment, including emergency treatment and follow-up care, as needed, or reimbursement for such medical services. M.I.T. does not provide any other form of compensation for injury. In any case, neither the offer to provide medical assistance, nor the actual provision of medical services shall be considered an admission of fault or acceptance of liability. Questions regarding this policy may be directed to MIT’s Insurance Office, (617) 253-2823. Your insurance carrier may be billed for the cost of emergency transport or medical treatment, if such services are determined not to be directly related to your participation in this study.
You are not waiving any legal claims, rights or remedies because of your participation in this research study. If you feel you have been treated unfairly, or you have questions regarding your rights as a research subject, you may contact the Chairman of the Committee on the Use of Humans as Experimental Subjects, M.I.T., Room E25-143B, 77 Massachusetts Ave, Cambridge, MA 02139, phone (617) 253-6787.
Please confirm that you agree to these conditions and that you are between 18 and 64 years of age.