The 3rd Advanced AI Utilization Advisory Board
Overview
Date and Time: March 10, 2026 (Tuesday), 9:30 a.m. to 11:30 a.m.
Venue: Office Room / Online
Agenda
- 1. Opening
- 2. Agenda Items
- Outline of Regular Report on generative AI System by Ministries and Agencies
- Trends in generative AI in Japan and Other Countries
- Draft Revision to Enhance the Guidelines for Procurement and Utilization of generative AI for Advancing and Innovating Administration
- 3. Closing
Materials
- Proceedings (PDF/56KB)
- Document 1: Outline of the Regular Report on the generative AI System by Ministries and Agencies (PDF/1,109KB)
- Document 2: Trends in generative AI in Japan and Other Countries (PDF/5,065KB)
- Document 3: Draft Revision to Enhance the Guidelines for Procurement and Utilization of generative AI for Advancing and Innovating Public Administration (PDF/7,893KB)
- Minutes (PDF/766KB)
References
- Reference Material 1: Draft Revision of the "Guidelines for the Procurement and Utilization of generative AI for Advancing and Innovating Public Administration" (PDF/4,536KB)
- Reference Material 2: Reference Sheet on Measures for Intellectual Property Rights, etc. (Draft) (not disclosed)
- Reference Material 3: Points of Attention When Creating Terms of Use for the generative AI System for Citizens (Draft) (not disclosed)
Attendees
- (1) Members
- Chairman Kadobayashi, Member Okada, Member Kitamura, Member Shibayama, Member Torisawa, Member Naganuma, Member Nabatame, Member Yoshinaga
- (2) Digital Agency
- Matsumoto Minister for Digital Transformation, Kawasaki Parliamentary Vice-Minister for Digital Transformation, Asanuma Consultant, Ito Consultant, Misumi Vice-Minister for Digital Transformation and Chief Officer of the Digital Agency, Tomiyasu Vice-Minister for Digital Policy, Ogiwara Director-General, Hasui Director-General, Morita Coordinating Deputy Director-General, Ibata Deputy Director-General, Okuda Deputy Director-General, Kitama Deputy Director-General, Naito Director, Hashimoto Director, Yamaguchi Director
Summary of the proceedings
At the beginning, Parliamentary Vice-Minister for Digital Transformation Kawasaki spoke about the significance of this Council's enhancement of the guidelines based on the "Basic Plan for AI," in light of the increasing use of AI agents accompanying the advent of robotics and stablecoins.
(1) Outline of the Regular Report on the generative AI System by Each Ministry and Agency
The secretariat reported the results of an analysis of the status of generative AI utilization by each ministry and agency, using Document 1.
The main questions and opinions of the participants are as follows.
Member Nabatame: I understand that each ministry and agency is putting effort into practical use, and both qualitative and quantitative results are visible. Some of these could be organized into quantitative performance targets. From a medium- to long-term perspective, I think it would be effective to share with the public what results each ministry and agency has achieved by introducing AI, and it would be good if these efforts contribute to efficient operations at each ministry and agency. Private companies have also confirmed that introducing AI reduces working hours, but the time saved is sometimes absorbed within the organization; as a result, there are cases where the actual work situation does not really change, even if staff now finish at 5 p.m. instead of 8 p.m. I think visualizing such issues is important when examining operational efficiency at each ministry and agency.
Secretariat: Regarding the performance targets, we have only just started promoting AI utilization this fiscal year, and we set the indicators while watching how much operational efficiency can be achieved. We will continue to pay close attention to how actual utilization progresses and how the performance targets can be met. We have also observed cases in which AI utilization does not reduce operational time, just as in the private sector. Since there may be situations that government offices cannot solve alone, we would like to draw on good management practices in the private sector.
Secretariat: There is also the question of the benefits of AI and how to present them. To promote its use, it is important to widely share the benefits that can be seen from objective indicators. We will continue to consider how to set indicators and how to present performance targets with that in mind.
Member Yoshinaga: Since AI must bring benefits such as operational efficiency to government officials, I think visualizing the results is a meaningful and good initiative. In addition, risks differ depending on the purpose of use; for example, systems for the public carry particularly high risks. In the future, when generative AI is used for policy planning and formation within each ministry and agency, care must be taken not to make decisions based on wrong information. I think it will be necessary to collect information on the purpose of use going forward. I would like you to show the details of the purpose of use, including whether it is mere information gathering or use in policy planning.
Secretariat: We would like to continue organizing how the purposes of use can be classified and share that with each ministry and agency.
Member Shibayama: In Document 1, it is written that use of AI referring to Confidentiality Class-2 information is increasing, and I think this is precisely the result of the Digital Agency's efforts. Regarding such use, for example when government officials want to use AI for meeting minutes, is there a mechanism or system for responding if the Digital Agency receives inquiries such as whether there are precedents, and which services are used under what conditions? The other day, when I personally received a consultation from a national corporation, they said that having specific precedents would be reassuring. Is there a mechanism to share within the government whether specific precedents exist and what they contain?
Secretariat: The Digital Agency operates a consultation desk, and a mechanism for linking it with the departments in charge at the various ministries and agencies is in operation. Although the accumulation of use cases is still in progress, at this stage it is possible to share examples of utilization according to the content of consultations.
Member Shibayama: Do you respond, to a certain extent, to inquiries about specific information such as which services are used?
Secretariat: That is right. Depending on the project, we may conduct hearings on its details and collect specific information through those hearings.
Member: The information collected is coarse-grained, and I cannot picture what measures should be taken in the future. Collecting more detailed information may be difficult because of the burden on each ministry and agency, but I think it would be good to collect at least summary information on the purpose of use and what kind of system is used. Since the promotion of use by each ministry and agency is now understood, I think it would be good to move beyond merely counting numbers to a stage of analysis.
Secretariat: Since the necessary information will change depending on the situation, we would like to consider the content of the analysis based on the needs of each ministry and agency. In particular, we would like to continue considering information such as the purpose of use and what kind of system or model is used.
(2) Trends in generative AI in Japan and other countries
The secretariat reported on trends in generative AI in Japan and other countries, using Document 2.
The main questions and opinions of the participants are as follows.
Member Nabatame: AISI states that visualization (observability) and containment (controllability) are important, and I agree. I wonder who exactly implements visualization, and how. The Singaporean Model AI Governance Framework for Agentic AI states that humans should be responsible, and I think how humans carry out the process of visualization is important. I would like to ask what kind of framework the Digital Agency has in mind and what efforts each ministry and agency is making.
Secretariat: Regarding visualization, the "AI Incident Response Approach Book" has a strong focus on visualization in terms of security, and the guidelines also state in Document 3 that verification through logs, etc. is possible. With regard to accountability, there is a description of human-centricity in the AI Guidelines for Business, and there is also a description of accountability in the guidelines on AI procurement and utilization; it has been made clear that the administration must bear primary accountability. Regarding visualization in the "AI Incident Response Approach Book," could Member Kitamura add anything?
Member Kitamura: As the Secretariat said, the framework of the "AI Incident Response Approach Book" focuses first on cybersecurity. The idea of an isolation zone is that if an incident occurs while using AI agents and the like, it is important to firmly isolate the AI so that it can continue to be used safely and securely. AISI will also release a practical guidebook for CAIOs by the end of this month. It is important to consider how regulations and guidelines, once created, can be kept updated in light of future trends.
Member Yoshinaga: In addition to the Singaporean Model AI Governance Framework for Agentic AI, the OECD has published a report on the definitions of AI agents and agentic AI. Please refer to that report, as well as the US NIST's efforts to develop technical standards for AI agents.
Secretariat: The OECD definition of agentic AI is also referred to in the revised draft of the AI Guidelines for Business, but it is provisional because it is still under consideration. Once the AI Guidelines for Business are finalized, we are considering referring to that definition in explanatory notes, etc.
Member Okada:
Secretariat: There is a risk that using AI becomes a purpose in itself. I think it is necessary to consider not only the utilization ratio but also the purpose of using AI and what can be achieved with it. A similar tendency can be seen in IT failure cases, where the use of IT itself became the purpose. I think "chance orientation" is also related to this perspective, so we would like to continue considering it while thoroughly advancing the survey.
Chairman Kadobayashi: Since there is room for deepening the purpose of use and observability of AI through the discussion of Agenda 1 and 2, we would like to continue to look at best practices among industry, government, and academia and consider ways to grasp the current situation.
(3) Proposed Revisions to Enhance the Guidelines for Procurement and Utilization of generative AI for Advancing and Innovating Public Administration
The secretariat reported on the proposed revisions to enhance the guidelines, using Document 3.
The main questions and opinions of the participants are as follows.
Member Naganuma: I explained the results of interviews with Keidanren-affiliated companies at the 2nd Advisory Board meeting, and I would like to make four points regarding the revised draft. First, regarding the AI covered by the guidelines, I agree with the direction of expanding the scope to include generative AI with audio and image output; this is in line with the actual state of utilization in both the public and private sectors, where business use of such AI is progressing. Second, the risk-judgment logic was updated this time; it is based on security-design thinking, and I evaluate it as consistent with the practical sense of the private sector. On the other hand, I am concerned that if the determination of significant impacts is left too much to individual CAIOs at the operation stage, judgments will vary among ministries, agencies, and projects. Third, the enhanced examples of measures, supporting information, and benchmarks in the procurement check sheet should improve transparency and accountability. Although the examples of measures are qualified with "as much as possible," I would like you to discuss the internal details of training data and models, security, and so on, so that they become realistic requirements from the perspective of competition; I would particularly appreciate discussion of the parts related to intellectual property rights. Also, while the previous questionnaire indicated that detailed checks will be conducted for each individual project, for certain requirements I think private companies will find it easier to participate if combinations with existing mechanisms such as external certification are shown. Fourth, under the current operation it is necessary to cross-reference multiple guidelines; although unifying them is probably difficult, I think the private sector will be able to prioritize and consult them if cross-references are attached.
Secretariat: Regarding the second point, the risk logic, there are still few high-risk cases. We assume the number will increase as operation progresses, and details based on specific examples will be discussed in the future. We receive consultations from ministries and agencies at the AI consultation window, and in operation we try to prevent differences in judgment between ministries and agencies from becoming too large. On the third point, regarding learning methods, there is a description of observability on page 24 of Document 3; at a minimum it describes what one should know when using AI, such as "acquiring information such as key performance indicators." On that basis, it states "within a reasonable scope for analyzing risks and considering risk responses," and we try not to make requests that touch on the trade secrets of generative AI developers beyond what is necessary. We will consider whether the requirements of the procurement check sheet are excessive, based on the results of the upcoming public comment. We have decided not to utilize external certification at this time, but will consider it while watching the actual state of introduction. Regarding the fourth point, we are incorporating the content needed from the perspective of procurement and utilization into the guidelines, while cross-referencing related government and AISI guidelines. Related guidelines are listed in the procurement check sheet so that, at a minimum, the situation can be grasped comprehensively in terms of procurement and utilization.
Member Yoshinaga: I have a question about the revised part of "6.5 Matters to be Handled by Providers of generative AI Systems in the Government" on page 14. Regarding the description that, in a generative AI system that allows users to create AI agents and the like capable of executing advanced tasks, creating a task that executes without judging the appropriateness of the AI's results must be reported to the provider or the Chief AI Officer (CAIO): I assume the terms of use will state that the system must not be used beyond the scope provided by the AI system provider. What specific events are assumed here?
Secretariat: An AI agent can automatically perform certain actions. If app linkage is set up, e-mail replies and the like are carried out automatically, and in some cases replies may be sent without human intervention. Major services already offer such functionality. How far this can be used within government ministries and agencies is a practical question, and the risk becomes high once it falls outside the human in the loop. Therefore, when such a capability can be provided, it is to be reported and the risk assessed in advance.
Member Yoshinaga: I am concerned about whether it is necessary to report to the system provider as well. Reporting everything to the CAIO would be a burden on the CAIO. I would like to confirm whether the description was written this way from the perspective of risk management.
Secretariat: Reporting to the provider is optional. However, even if something is reported to the provider, it is also to be reported to the CAIO.
Chairman Kadobayashi: The secretariat will consider the relevant parts, including improvements to the writing style.
Member: You said that you would improve the procurement check sheet through public comments, but since there are not many business operators developing domestic LLMs, I think it would be better to also hold direct discussions with them; public comments alone may not capture all the nuances. It would also be good to discuss whether the requirements in the check items of the procurement check sheet can actually be met. Seven domestic LLMs were selected for "Gennai" this time, and I think it would be good to consider consulting with those business operators.
Secretariat: We plan to exchange opinions with business operators in parallel with the public comment process.
Chairman Kadobayashi: I am also concerned about this area. I understand that close coordination with the developers of LLMs, AI agents, and the like is necessary. Do you mean that the secretariat can arrange such a meeting separately?
Secretariat: We would like to work on that.
Member Nabatame: Member Naganuma also emphasized the revision concerning intellectual property rights, and I agree with it and with expanding the clear descriptions on this point. In the United States, lawsuits related to copyright are common, the most frequent being copyright-infringement cases. Based on that, I understand that the description on preventing infringement by AI rests on the idea of not infringing on other people's intellectual property rights. There are cases where government agencies themselves hold intellectual property rights, but there are also cases where third parties access government data with malicious intent and infringe copyrights. I would like to ask what measures and alerts address these cases.
Secretariat: Data published by the government on its websites, etc. is provided as open data, and rules on open data were established before generative AI emerged. When an AI system accesses open data, this is assumed to be basically within the scope of those existing rules. In principle, when using data published by ministries and agencies, there is no problem as long as the source is credited.
Member Nabatame: I understand that the existing regulations can cover this. That said, generative AI may be used maliciously, for example to generate or alter fakes. It would therefore be good to consider issuing a warning about the use of government data by generative AI.
Secretariat: We would like to consider that.
Member Kitamura: Technologies around AI are changing rapidly, and there is a limit to the conventional approach of revising regulations and guidelines every six months to a year. It is therefore important to adopt a method that can update content more flexibly, such as release notes announcing changes and updates as needed, and to consider operating it in a way that leverages Japan's strengths.
Secretariat: This time, the guidelines were designed so that the branches and leaves can be updated easily without changing the trunk. For example, the detailed examples of measures are treated as reference information at this point and may be separated from the guidelines in the future. In the Points to Consider (1) to (4) of the revised draft, we have added a statement regarding calls for attention to the CAIO, and in practice CAIO liaison meetings have been held. We would like to share operational information that the guidelines alone cannot convey, not only by revising the guidelines themselves but also through such CAIO liaison meetings.
Member Kitamura: 2025 is called the Year of AI Agents, and 2026 the Year of Physical AI. In the United States, a systematic approach to AI agents is expected to emerge in the first and second quarters with a view to project demonstrations, so developments need close attention. I would also like to share, as reference information for continuous trend surveys, the situation when I visited German research institutes and universities last year, to help interpret "chance orientation" in the German "Guidelines for the Use of Artificial Intelligence in Federal Administration" in Document 2. In the exchanges of opinions during the visit, it was indicated that Germany intends to strengthen industrial competitiveness through digital transformation and AI, thereby improving national strength. They also mentioned the significance of initiatives that leverage their own strengths, such as quality assurance. Since quality assurance is also a strength of Japan, I think German trends will be a useful reference for future consideration.
Secretariat: We would like to refer to the German example in future considerations.
Member Okada: These guidelines were built around how to promote the utilization of generative AI, starting from a state in which generative AI was not yet in use. They therefore assume that humans will read and understand the guidelines before using generative AI. As that premise changes, I expect the guidelines themselves will come to be read and utilized by generative AI. The ministries and agencies subject to the guidelines may also use them by having generative AI read the contents and then asking it questions. However, this change in how the guidelines are used may create new risks. What consideration and discussion is being advanced on this point?
Secretariat: Based on the current procurement check sheet, "Gennai" can be used to check whether procurement specifications are met, and the current guidelines can also be used more effectively by generative AI. On the other hand, the risk of hallucination naturally cannot be reduced to zero when using generative AI, so confirmation by human eyes is essential. Going forward, we will remain mindful of machine readability and make such usage easy.
Member Okada: At present, it is clear that humans are responsible for the output results, but that responsibility may become unclear in the future. I would therefore like the government to continue this discussion.
Member Kitamura: The pace of change is rapid, and generative AI is moving from the "introduction stage" to a "stage of widespread use across society as a whole." At the same time, standardization of the technology is progressing. Given these phase changes in AI use, responding on the basis of the conventional introduction stage is not enough; it is important to consider approaches suited to the new phase of widespread use.
Member Naganuma: There are similar issues in the private sector, but there are concerns that the work of the CAIO will become enormous. Discussions on the governance of AI agents are progressing in global think tanks and elsewhere, and it is recognized that methods of risk control will change from past AI governance. As what CAIOs must oversee changes, will efforts to change the awareness of CAIOs be considered within the ministries and agencies?
Secretariat: The CAIO liaison meetings have been used to share the contents of the revised guidelines and related knowledge. CAIOs participated in today's hearing, and CAIOs and the departments supporting them have been catching up by participating in this Advisory Board. For the governance of AI agents, rather than relying only on reporting within the government, we would like to look to other countries and the private sector for the ideal form of discipline, and to deploy the best practices of ministries and agencies that are advancing AI agent use to others as appropriate.
Member Kitamura: AISI is accelerating its research project on AI agents and plans to publish a guide on evaluation perspectives for AI agents next year.
Parliamentary Vice-Minister for Digital Transformation Kawasaki: This point is common to Documents 1 and 2. There is a concern that having people review all the risks of generative AI will become an enormous task and operation will become difficult. In customizing and deploying "Gennai" to each ministry and agency, it may be necessary to increase the granularity of the descriptions in the guidelines in line with each ministry's actual usage. It is also likely to become a burden for people to comply with every rule in the guidelines. For example, regarding the rule in the template Terms of Use that confidentiality-designated information must not be entered in prompts, it is worth considering enforcing prompt restrictions in the system from the start. The Digital Agency needs to take such advanced measures.
Member: In the future, it may be possible to create the summary of the periodic report in Document 1 automatically. I think it would be a good idea to consider such a direction.
Chairman Kadobayashi: Improving explainability and readability should be considered from the next fiscal year onward, based on the discussion at this plenary session.
Member Shibayama: I would appreciate it if the comments submitted in advance were also reflected in the procurement check sheet and the contract check sheet. The guidelines are enormous in volume, and I think on-site checks are difficult to carry out. Since past examples are likely the most helpful for such checks, it would be good to further improve the horizontal deployment of use cases and the information-sharing system.
Chairman Kadobayashi: This concludes the exchange of views. If today's discussion shows that revisions are needed, I, as chairman, will be entrusted with making them, consulting with the Digital Agency and individual members as necessary, and I will report the details of any revisions to you.
At the close of the meeting, Minister for Digital Transformation Matsumoto stated that, in order to make Japan the easiest country in the world in which to utilize AI, the government would like to move ahead at full speed with the revision of the guidelines and with promoting the use of "Gennai" at each ministry and agency, so as to achieve well-balanced AI utilization.