Policy on the use of AI tools in classes

April 28, 2023

The University of Tokyo will not uniformly prohibit the use of natural language-generation AI tools, such as ChatGPT, in educational settings. While understanding the issues associated with such technology, we will actively explore its potential applications to education, research, and university administration, and we will continue to engage in dialogue about how such tools can be used, the precautions they require, and their long-term impact.

However, in specific contexts, particularly in education, it may be appropriate to prohibit the use of such tools, depending on the educational objectives and goals in each situation. Policies regarding AI tools should be determined appropriately for each class and assignment; whatever policy is adopted in each specific case, it should be regarded as a component of the teaching methodology that the faculty involved believe to be most effective.

Education at the University of Tokyo has always relied on the initiative, individuality, and creativity of expert faculty members (and of faculty groups, such as departments and majors) to develop high-quality teaching methods. Similarly, decisions on whether or not to use generative AI tools, and on how to use them, should be made by the faculty members themselves with the goal of maximizing educational effectiveness.

With that in mind, we summarize below the issues surrounding generative AIs and our current thinking about them, which we would like faculty to keep in mind when designing and considering teaching methods.

[1] Try to make generative AI tools such as ChatGPT, Bing AI, and Bard do the assignments and exams you have given in the past.

Generative AIs have strengths, weaknesses, and limitations. They are said to be particularly good at writing short summaries and essays on specified topics. In educational settings, for example, they can effortlessly produce writing at a level that would normally take students considerable time to achieve.

Determine how well your assignments can be completed by generative AI tools like ChatGPT and incorporate this understanding into the design of your educational policies and methods.

Please note that, since exam questions are highly confidential documents, you should not, in principle, input them directly into generative AI tools.

[2] Clearly state your stance as instructor on the use of generative AIs for each class and assignment.

The issues mentioned above do not necessarily mean that you should avoid questions and tasks that can be easily answered using generative AIs. Rather, tasks should be designed to maximize the educational effect (training effect) for the students. If using generative AIs to answer questions would not achieve the educational objectives, communicate this to the students and tell them to answer without using such tools. Instructors should decide for themselves, based on their own judgment and educational objectives, whether to allow or prohibit the use of generative AIs for their assignments, and they should clearly communicate their policies to their students.

If you do allow the use of generative AIs, please also explain to the students the issues that are currently known, including 1) the risk of leakage of personal and confidential information, 2) the increased collection of information by a small number of companies, 3) concerns about copyright infringement, and 4) the possibility of biases in the learned content. You are also recommended to have students specify which generative AI they used when submitting reports that utilized such a tool.

[3] Inform the students of the purpose of the task and the learning objectives. Emphasize the importance of the process of finding an answer rather than the final answer itself.

It is beneficial to explain to students the importance of thinking about the assignments on their own, of coming up with their own innovative solutions, and of experiences that cannot be obtained simply by getting answers from generative AIs (or from other people), even if such explanations may seem obvious. These principles have always been important in education, but they are particularly crucial if you prohibit the use of generative AIs for tasks where results could be easily obtained using such tools.

What we seek in education is not just the result, but what students learn in the process of obtaining that result. Explain this thoroughly to your students. Even when students are allowed to use generative AIs, simply copying and pasting the answers generated by the AI and submitting them should not be accepted, as that would provide no learning effect for the students. (Exceptions may include tasks that focus on improving the accuracy of the AI-generated answers.) Point out the possibility of errors in the answers generated by the generative AI, and emphasize the need for students to scrutinize the veracity of the output themselves.

Assuming the current level of generative AIs, we make the following additional suggestions:

[4] When possible, consider adopting tasks and questions that cannot be easily completed or answered by generative AIs.

Whether allowing or prohibiting the use of generative AIs, consider modifying the content and format of tasks to prevent answers from being easily obtained through simple AI use.

However, it is important to be careful not to lose track of the original educational objectives and learning goals of the tasks. For example, it would be counterproductive if the creation of questions that generative AIs cannot answer results in excessive difficulty or volume of tasks. Some examples of format adjustments that do not distort the objectives of tasks include:

  • Assign short tasks during class.
  • Emphasize the process leading to the answer.
  • Provide options for tasks (i.e., allow students to choose tasks they want to do).
  • Require students to indicate their information sources.

(Example reference: Eberly Center, Carnegie Mellon University, “AI Tools (ChatGPT) FAQ, 3. How can I design my assignments to facilitate students generating their own work?” https://www.cmu.edu/teaching/technology/aitools/index.html)

[5] Do not rely too heavily on tools that claim to detect whether a given piece of text was generated by AI.

Tools are now available that claim to detect whether a text was generated by generative AIs, but they should not be over-relied upon, as the generators themselves are changing rapidly. The judgments of such tools are insufficient to serve as evidence that a student has inappropriately used generative AIs.

[6] Impacts and effects that bring about educational benefits

While students writing reports used to have to go to the library to find information, it is now much more common and efficient to search the Internet using Google and other search engines. Generative AIs have the potential to gain users rapidly in a similar manner.

However, generative AIs only generate texts that their language models deem appropriate, so their reliability is more problematic than that of Internet search results (which also include errors). The principles used for Internet searches also apply to generative AIs: do not blindly believe the AI’s responses; learn to assess the credibility of information yourself; and remember that you can obtain more credible information only by further research into primary sources.

With the advent of generative AIs, it becomes important to teach the basics of research anew, including evaluating the credibility of information sources and citing and verifying sources. This is also a valuable teaching opportunity. Raise with your students the risks associated with relying solely on easy Internet searches and generative AIs for solving problems.

Generative AIs may offer new possibilities for higher education. For example, they can be used to support student learning. When writing a report, it is important for students to discuss with each other and with instructors, engaging in appropriate questioning and dialogue. The importance of human-to-human discussion and dialogue will not be lost in the future. However, when studying alone, students may find it useful to ask appropriate questions to generative AIs and engage in dialogue with them as a methodology for thinking and inquiry. For students who struggle with dialogue and asking questions in public, generative AIs may provide a useful platform for asking questions. They may also be used for personal brainstorming or as a sounding board when preparing presentations. These educational possibilities have just begun to emerge, so it is important to strive to improve teaching methods while carefully assessing the impact of any changes.

More specific methods include teaching students how to experiment with prompts—that is, questions and instructions—given to generative AIs to elicit the responses closest to the desired ones, or devoting class time to identifying mistakes and limitations in AI-generated responses. Be creative in finding new directions for higher education.


In the medium to long term, the challenge will be to determine whether and how to change educational content, methods, and evaluation methods based on the assumption that generative AIs can enable the collection of useful information and more efficient work (while acknowledging that they may still be inaccurate or even entirely wrong at times).

This challenge will be addressed through dialogue among educators in every field, transcending academic boundaries. We plan to provide forums for university-wide discussions on this issue, and we hope to exchange ideas and experiences with people from every academic discipline. We also plan to update this document when appropriate.

Kunihiro Ohta, Ph.D.
Executive Vice President of the University of Tokyo in charge of education and information technology

NOTE: The original draft of this document was created in Japanese by members of the Dokodemo Campus Project. The above English translation was done initially using ChatGPT (version GPT4), followed by human editing.
