New International AI Safety Update Warns of Rapidly Evolving Risks

INFO Staff - November 5, 2025

INFO’s Lee J. Tiedrich helps lead global efforts to address rapidly evolving AI risks in safety, cybersecurity and policy

A digital rendering of a judge’s gavel made of glowing blue data points striking a map of the world, symbolizing global AI governance and regulation.

Photo licensed by Adobe Stock via Road Red Runner

The first “key update” to the January 2025 International AI Safety Report warns that recent AI breakthroughs are increasing risks in cybersecurity and biosecurity. The First Key Update: Capabilities and Risk Implications, released October 15, 2025, highlights the challenges of monitoring and managing these rapidly evolving threats. It also details how new training techniques have enabled AI systems to solve complex problems in mathematics, coding and science. These same capabilities, however, pose national and global security risks.

The annual International AI Safety Report, established in 2023 by 30 nations and bodies including the UN and EU, aims to build a global scientific consensus on AI risks. It is the world’s first comprehensive review of the latest science on the capabilities and risks of general-purpose AI systems. Written and advised by more than 100 independent experts, and led by Turing Award winner Yoshua Bengio, it represents the largest international collaboration on AI safety research to date. To keep pace with the field, the report now issues timely key updates alongside its main annual publication.

Lee J. Tiedrich

Guiding this critical work are global policy experts such as Lee J. Tiedrich, a Senior Adviser to the report and its key updates. Tiedrich joined the University of Maryland (UMD) in 2025 as a fellow at the Artificial Intelligence Interdisciplinary Institute at Maryland (AIM), a visiting professor of the practice at the College of Information (INFO), and co-director of the UMD Tech Policy Hub.

Tiedrich’s expertise bridges technical AI development, law and policy, informed by her background in engineering and law. She serves as a member of the expert group for the Global Partnership on Artificial Intelligence (GPAI/OECD), an international initiative with 44 member countries that promotes the responsible development and use of artificial intelligence, and co-chairs GPAI’s Responsible AI Strategy for the Environment Committee as well as its Intellectual Property Committee. Previously, Tiedrich served as a senior AI advisor at the U.S. National Institute of Standards and Technology (NIST) and as a consultant for the Organisation for Economic Co-operation and Development (OECD), positioning her at the forefront of international AI safety, risk management, policy and governance.

Tiedrich’s role exemplifies the collaborative, multidisciplinary approach of the International AI Safety Report and its key updates. Writers and advisors span the globe, coming from academia, policy, government, industry and civil society.

To stay up to date with International AI Safety Report publications and updates, visit the report’s publications webpage. To get involved with UMD at the intersection of AI and tech education, research and policy, consider joining or following the UMD Tech Policy Hub.


The First Key Update: Capabilities and Risk Implications was written, advised and reviewed by International AI Safety Report contributors and published by the International AI Safety Report on October 15, 2025.