Both the US and the EU have mandated a risk-based approach to AI development. Whatever your risk level, ultimately it's all about transparency and security.

As organizations of all sizes and sectors race to develop, deploy, or buy AI and LLM-based products and services, what should they be thinking about from a regulatory perspective? And if you're a software developer, what do you need to know?

The regulatory approaches of the EU and US have, between them, firmed up some of the more confusing areas. In the US, a new requirement directs every federal agency to appoint a chief AI officer and submit annual reports identifying all AI systems in use, the risks associated with them, and how it plans to mitigate those risks. This echoes the EU's requirements for similar risk assessment, testing, and oversight before deployment in high-risk cases.

Both have adopted a risk-based approach, with the EU specifically identifying the importance of "security by design and by default" for high-risk AI systems. In the US, the Cybersecurity and Infrastructure Security Agency (CISA) states that "Software must be secure by design, and Artificial Intelligence is no exception." This is likely to be music to the ears of anyone familiar with proactive security. The more we do to reduce the friction between machine logic and human analysis, the more we can anticipate threats and mitigate them before they become a problem.

Code is at the core

At a fundamental level, "security by design, by default" begins with the software developer and the code used to build AI models and applications. As AI development and regulation expand, the developer's role will evolve to include security as part of daily life. Code underpins every AI system, and if you're a developer using AI, you're going to be keeping a closer eye than ever on weaknesses and security.

Hallucinated or deliberately poisoned software packages and libraries are already emerging as a very real threat. Software supply chain attacks that start life as malware on developer workstations can have serious consequences for the security and integrity of data models, training sets, and final products.

It's worth noting that the malicious submissions that have dogged code repositories for years are already appearing on AI development platforms. With massive volumes of data reportedly being exchanged between machine learning hubs such as Hugging Face (aka "the GitHub of AI") and enterprise applications, baking security in from the outset has never been more critical.

Article 15 of the EU's AI Act seeks to preempt this scenario by mandating measures to test, mitigate, and control risks, including data and model poisoning. In the US, new guidance calls for government-owned AI models, code, and data to be made publicly available unless they pose an operational risk.

With code simultaneously under scrutiny and under attack, organizations developing and deploying AI will need a firm handle on the weaknesses and risks in everything from AI libraries to devices. Proactive security will be at the heart of driving security by design, as regulations increasingly require the ability to find weaknesses before they become vulnerabilities.
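To make the hallucinated-package threat concrete: pinning exact artifacts and their hashes means a dependency that was never vetted, whether typosquatted or invented by an LLM, simply never gets installed. What follows is a minimal sketch of that idea in Python; the package name and digest are placeholders, and in practice you would lean on pip's built-in --require-hashes mode or a lockfile-aware installer rather than rolling your own.

```python
# verify_artifacts.py -- hedged sketch: check downloaded packages against
# SHA-256 digests pinned at vetting time. Names and digests are placeholders.
import hashlib
import sys
from pathlib import Path

# Digests recorded when each artifact was first reviewed.
# (The digest below is the SHA-256 of an empty file -- a placeholder.)
PINNED = {
    "some_ai_lib-1.2.0-py3-none-any.whl":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256(path: Path) -> str:
    """Stream the file through SHA-256 so large wheels don't load into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(download_dir: str) -> bool:
    """Fail if any pinned artifact is missing or its digest has changed.
    Anything *not* in PINNED -- e.g. a hallucinated package name -- is
    never installed, because it was never vetted and pinned."""
    ok = True
    for name, expected in PINNED.items():
        artifact = Path(download_dir) / name
        if not artifact.exists():
            print(f"MISSING   {name}")
            ok = False
        elif sha256(artifact) != expected:
            print(f"TAMPERED  {name}")
            ok = False
        else:
            print(f"OK        {name}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify(sys.argv[1] if len(sys.argv) > 1 else ".") else 1)
```

Run against the download directory before anything is installed; a single TAMPERED or MISSING line should stop the build.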
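Model artifacts deserve the same skepticism as packages. Here is a hedged sketch assuming the Hugging Face transformers library; the repository id and revision are illustrative placeholders, but the two defensive moves are real: pin an exact, vetted commit so a later (possibly tampered) upload is never pulled silently, and prefer safetensors weights over pickle-based files, which can execute arbitrary code on load.

```python
# Hedged sketch: load a Hugging Face model defensively.
# The repo id and revision below are hypothetical placeholders.
from transformers import AutoModel, AutoTokenizer

REPO = "example-org/example-model"   # hypothetical repository id
REVISION = "0123456789abcdef..."     # pin the exact commit hash you vetted

# revision= pins the download to a specific commit, so a new upload to the
# same repo cannot slip in unnoticed; use_safetensors=True refuses
# pickle-based weight files entirely.
tokenizer = AutoTokenizer.from_pretrained(REPO, revision=REVISION)
model = AutoModel.from_pretrained(REPO, revision=REVISION, use_safetensors=True)
```

The pinned revision plays the same role as the package hashes above: it turns "latest" into a deliberate, reviewable choice.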
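Article 15's testing mandate has to start somewhere. The sketch below is our deliberately minimal illustration of one pre-training check, flagging samples that sit unusually far from their label group's centroid; it is not a method prescribed by the Act, and real poisoning defenses go much further.

```python
# Hedged sketch: a minimal pre-training sanity check. It only flags gross
# statistical outliers within each label group -- serious poisoning
# defenses (spectral signatures, influence functions, etc.) go much further.
import numpy as np

def flag_suspect_samples(features: np.ndarray,
                         labels: np.ndarray,
                         z_max: float = 4.0) -> np.ndarray:
    """Return a boolean mask marking samples unusually far from the
    centroid of their own label group."""
    flags = np.zeros(len(features), dtype=bool)
    for label in np.unique(labels):
        idx = np.where(labels == label)[0]
        group = features[idx]
        dists = np.linalg.norm(group - group.mean(axis=0), axis=1)
        spread = dists.std() or 1.0   # guard against tiny, uniform groups
        flags[idx] = (dists - dists.mean()) / spread > z_max
    return flags

# Usage: review -- don't silently drop -- whatever gets flagged.
# suspects = flag_suspect_samples(X_train, y_train)
# print(f"{suspects.sum()} samples need a human look")
```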
Innovation meets risk meets reality

For many organizations, the risk will vary depending on the data they use. Healthcare companies, for example, will need to ensure that data privacy, security, and integrity are maintained across all outputs. Financial services companies will be looking to balance benefits such as predictive monitoring against regulatory concerns for privacy and fairness. Both the EU and US regulatory approaches place a heavy emphasis on privacy, protection of fundamental rights, and transparency.

From a product development perspective, the requirements depend on the type of application:

- Unacceptable risk: Systems considered a threat to humans will be banned, including government-run social scoring, biometric identification and categorization of people, and facial recognition, with some exceptions for law enforcement.
- High risk: Systems with the capacity to negatively impact safety or fundamental rights. These fall into two categories: AI systems used in products covered by EU product safety legislation, including aviation, automotive, medical devices, and elevators; and AI systems used in areas such as critical infrastructure, education, employment, essential services, law enforcement, border control, or application of the law.
- Low risk: Most current AI applications and services fall into this category and will be unregulated, including AI-enabled games, spam filters, and basic language models for grammar-checking apps.

Overall, under the EU AI Act, applications like ChatGPT aren't considered high risk (yet!), but they will have to ensure transparency around the use of AI, avoid generating illegal content, and disclose any use of copyrighted data in training models. Models with the capacity to pose systemic risk will be obliged to undergo testing prior to release, and to report any incidents.

For products, services, and applications in the US, the overarching approach is also risk-based, with a heavy emphasis on self-regulation. The bottom line is that restrictions increase with each level. To comply with the EU AI Act, developers of high-risk systems will have to satisfy a range of requirements before deployment, including risk management, testing, data governance, human oversight, transparency, and cybersecurity. If you're in the lower risk categories, it's all about transparency and security.
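One way to keep those obligations from living only in a policy document is to encode them in the release pipeline. The sketch below is our illustration, not anything the regulations prescribe: the tier names and the high-risk control list mirror the requirements described above, while the function and control identifiers are hypothetical.

```python
# Hedged sketch: encoding the risk tiers above as a pre-deployment gate.
# The control names mirror the requirements described in the article;
# the identifiers themselves are illustrative, not regulatory language.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOW = "low"

REQUIRED_CONTROLS = {
    RiskTier.UNACCEPTABLE: None,  # banned outright -- never deployable
    RiskTier.HIGH: {
        "risk_management", "testing", "data_governance",
        "human_oversight", "transparency", "cybersecurity",
    },
    RiskTier.LOW: {"transparency", "security"},
}

def release_gate(tier: RiskTier, completed: set) -> bool:
    """Allow deployment only when every control required for the tier
    has been completed and signed off."""
    required = REQUIRED_CONTROLS[tier]
    if required is None:
        raise RuntimeError("unacceptable-risk systems cannot be deployed")
    return required <= completed  # subset test: all required controls done

# Usage:
# release_gate(RiskTier.HIGH, {"testing", "transparency"})   # -> False
# release_gate(RiskTier.LOW, {"transparency", "security"})   # -> True
```

A CI job that calls a gate like this before any deployment turns "security by design" from an aspiration into a blocking check.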
Proactive security: Where machine learning meets human intelligence

Whether you're looking at the EU AI Act, the US AI regulations, or the NIST Cybersecurity Framework 2.0, everything ultimately comes back to proactive security, and to finding weaknesses before they metastasize into large-scale problems. A lot of that is going to start with code. If a developer misses something, or downloads a malicious or weak AI library, sooner or later it will manifest as a problem further up the supply chain. If anything, the new AI regulations have underlined the criticality of the issue, and the urgency of the challenges we face. Now is a good time to break things down and get back to the core principles of security by design.

Ram Movva is the chairman and chief executive officer of Securin Inc. Aviral Verma leads the Research and Threat Intelligence team at Securin.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld's technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.