News Roundup - 25/11/24
Here are some stories and articles we followed in the last week.
Does "open-source" mean something different in the AI world?
Bruce Schneier's blog post discusses how the AI industry is attempting to redefine "open source AI" in ways that undermine its true meaning. The Open Source Initiative (OSI) has published a definition that allows for secret training data and development processes, which Schneier argues contradicts the principles of open source. He emphasizes that for neural networks, training data is akin to source code, and keeping it secret defeats the purpose of openness.
Schneier highlights that many so-called "open source" AI models, like Meta's Llama, are not truly open. He suggests that industry players are pushing for this diluted definition to maintain corporate secrecy while benefiting from the open source label. Schneier calls for a genuine public AI option, stressing that real open source AI is crucial for transparency and trust in digital systems.
Leaders declare (and agree) tech is good and bad
At COP29 in Baku, over 1,000 stakeholders managed to agree on the COP29 Declaration on Green Digital Action. It's truly heartwarming to see everyone patting themselves on the back for recognizing that digital technology can help with climate action.
The declaration is all about balancing the benefits of digital tools with their environmental impact. ITU Secretary-General Doreen Bogdan-Martin emphasized the importance of reducing the carbon footprint of digital technologies. Because, you know, acknowledging the problem is practically the same as solving it, right?
Health for your secrets?
This article from The Conversation highlights significant privacy and security risks associated with fitness apps, particularly their ability to reveal users' locations. Despite the benefits of tracking workouts and progress, these apps can inadvertently expose sensitive information, as seen when Strava's Global Heatmap revealed the locations of secret military bases.
The article underscores the inadequacy of current legal frameworks, such as the UK's Data Protection Act 2018, which have not evolved to address the complexities of data shared through fitness apps. These apps are often classified as "low-risk" AI systems, subject only to basic product liability laws rather than the stringent regulations applied to higher-risk technologies like medical devices.
To mitigate these risks, the article calls for updated laws and a dual responsibility approach: regulatory bodies must enhance data protection measures, and users need to be more vigilant about sharing personal information. This combination is essential for maintaining digital trust in an era of rapid technological advancement.