
News Roundup - 18/11/24
Breaking! AI is still pretty questionable at spelling within images.

Here are some stories and articles we followed in the last week.

Canada launches AI Safety Institute

Canada has launched the Canadian Artificial Intelligence Safety Institute (CAISI) to enhance the country’s ability to manage AI safety risks. This initiative is part of a broader $2.4 billion investment aimed at promoting the safe and responsible development of AI technologies. CAISI will leverage Canada’s strong AI research community and collaborate internationally to address risks such as disinformation, cybersecurity breaches, and election interference.

The institute will operate with an initial budget of $50 million over five years and will be housed at Innovation, Science and Economic Development Canada. It will work closely with the National Research Council of Canada and the Canadian Institute for Advanced Research (CIFAR) to conduct both applied and investigator-led research. This effort is part of Canada’s broader strategy, which includes the proposed Artificial Intelligence and Data Act and the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

CAISI will also interface with various AI hubs located around the country.

For more information, see the press release.

The need for more engagement on AI policy

Katie Gibson makes several good points in this article from The Philanthropist.

We need a diverse set of viewpoints when discussing AI policy, and not all of them have to be technical. From the article:

To make your voice heard in the AI policy process, you don’t need a comprehensive understanding of the technology or the proposed policy instruments. Those change, sometimes daily. What you do need is an appreciation of the key tensions at the heart of AI policy-making in Canada. Those affect the entire policy-making process – whether as text or subtext. Understanding these tensions gives you the solid footing to engage. Below are some of the tensions to be aware of, although a complete accounting would of course require a much longer discussion.

We encourage you to give it a read to understand the current battles in the AI policy landscape.

LLM-controlled robots: what could go wrong?

The article "Jailbreaking your friendly, garden-variety, bomb-carrying robot dog" discusses the vulnerability of LLM-controlled robots to jailbreaking attacks. It highlights RoboPAIR, a framework that successfully jailbroke three different types of robots (a sketch of the underlying attack loop follows the list below).

Key points include:

  • Security Risks: The ability to jailbreak these robots underscores significant security risks in AI systems.
  • Framework Effectiveness: RoboPAIR’s success in jailbreaking different types of robots demonstrates the need for robust security measures.
  • Implications for Trust: These vulnerabilities can erode public trust in AI technologies, emphasizing the importance of developing secure and trustworthy AI systems.
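For the technically curious: RoboPAIR builds on PAIR (Prompt Automatic Iterative Refinement), in which an attacker LLM proposes a candidate jailbreak prompt, the target model responds, and a judge LLM scores how close the response comes to the attacker's goal, with failed attempts fed back to the attacker for refinement. RoboPAIR extends this idea to robots by steering prompts toward actions the target platform can actually execute. The Python below is a minimal illustrative sketch of that refinement loop; the type aliases and function names are our own placeholders, not code from the paper.

```python
from typing import Callable, Optional

# Hypothetical sketch of a PAIR-style iterative jailbreak loop.
# Each callable stands in for a real LLM query; none of these names
# come from the RoboPAIR paper.
Attacker = Callable[[str, list], str]    # (goal, history) -> candidate prompt
Target = Callable[[str], str]            # prompt -> target model's response
Judge = Callable[[str, str, str], int]   # (goal, prompt, response) -> score 1-10

def pair_loop(goal: str, attack: Attacker, target: Target, judge: Judge,
              max_iters: int = 20, threshold: int = 10) -> Optional[str]:
    """Refine an adversarial prompt until the judge rates the target's
    response a full jailbreak, or the iteration budget runs out."""
    history: list = []
    for _ in range(max_iters):
        prompt = attack(goal, history)         # 1. propose a candidate prompt
        response = target(prompt)              # 2. query the robot's LLM planner
        score = judge(goal, prompt, response)  # 3. grade how jailbroken the reply is
        if score >= threshold:
            return prompt                      # working jailbreak found
        history.append((prompt, response, score))  # feed the failure back
    return None                                # no jailbreak within budget
```

The design point worth noticing is that the loop is fully automated: no human red-teamer is needed, which is what makes attacks of this kind cheap to scale against deployed systems.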

This article underscores the critical need for enhanced security protocols to maintain digital trust in AI applications, especially as LLM-controlled systems move into the physical world.