Move Fast, Break Laws: AI, Open Source and Devs (Part 2)

Steve Poole

The software development landscape is rapidly changing, with legislation emerging as a key driver of industry trends. As our reliance on software and AI grows, so does our vulnerability to cybercrime, which is now a multi-trillion-dollar problem. This has caught the attention of regulators worldwide. 

This article explains the various regulatory efforts in play and summarises the actions that developers and executives should consider as they get to grips with 2025 – the year of software legislation.

Part 1 covered the background, what a software supply chain is and thoughts on AI and open source.

Part 2 (this article) explores how governments are working to create legislation and what the current status is.

Part 3 offers both a software supply chain checklist and an AI governance & compliance checklist for developers and executives to consider.

Part 4 discusses cybersecurity and incident reporting requirements, examines geopolitical compliance and liability management, and wraps up the series.

There’s a lot to take in. I hope you’re sitting comfortably.

Accountability Cannot Be Outsourced

I am not a lawyer. This article offers a technical view of the legislation and regulations being developed or repurposed. It is imperative to get your own legal assessment when deciding whether these elements apply to your situation. That said, some aspects are shared, and the primary one is accountability. There is no dodging your responsibilities: wherever you sit in the software supply chain, you have responsibilities to those consuming your software and to those ultimately using it. The regulations collectively require organisations to assess, monitor, and manage third-party risks, and you will have to prove that you did the right thing at the right time.

Blaming others without proper due diligence and safeguards is not a valid defence!
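As a concrete (and deliberately simplified) illustration of "proving you did the right thing at the right time", an evidence trail can be as basic as recording what was reviewed, by whom, when, and a cryptographic hash of the artefact. The sketch below is hypothetical and not tied to any specific regulation; real due-diligence evidence is defined by your own legal and compliance requirements:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_review(artifact_path: str, reviewer: str,
                  log_path: str = "audit_log.jsonl") -> dict:
    """Append a timestamped, content-hashed review record to an audit log.

    Illustrative sketch only: the fields and log format here are
    assumptions, not mandated by any of the regulations discussed.
    """
    data = Path(artifact_path).read_bytes()
    entry = {
        "artifact": artifact_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # ties the record to exact content
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")  # append-only JSON Lines log
    return entry
```

The point is not the specific format but the habit: an append-only record that links a dated decision to the exact content it was made about.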

Behind the scenes of regulatory development

Governments trying to create regulations in any field need experts to help develop an appropriate and helpful approach. Although there is minimal trust in the software industry to self-police, there is broad recognition that the expertise to do so is within the software industry.  

Government bodies, existing standards groups, industry groups, and others collaborate to create standards, frameworks, and regulations.  

At the highest level, several international bodies are involved in this discussion:

  • United Nations
  • International Organization for Standardization (ISO)
  • World Trade Organization (WTO)
  • Financial Action Task Force (FATF)
  • International Telecommunication Union (ITU)

The critical takeaway is the global nature of the responses. While laws and regulations in one country may differ regarding a particular element, common standards and approaches are at the root of the legislation.  

Currently (and crudely), the US focuses on cybersecurity, the EU on AI control, and China on data privacy. Other countries have their own initiatives, but the general pattern is clear.

There is an informal division of labour among world governments: different ones focus on building standards for different elements of the efforts underway. One country may formalise a standard by converting some or all of it into law, while another may simply reference the standard itself as the requirement to comply with.

The Brussels Effect 

This term (https://en.wikipedia.org/wiki/Brussels_effect) refers to the impact large economic groups have on others. In this case, it relates to the EU’s standards having a de facto effect on companies outside the EU. There are other variants of the term, but the net effect is that, regardless of your country, software use in your organisation is likely to require complying with the sum of the legislation being developed. As multinational organisations need to comply with all the laws in all the countries in which they do business, they naturally distil a union of these laws and regulations and ultimately require their suppliers to follow suit.

The “Brussels Effect” works beyond the EU too, so the net takeaway is that we will all have to deal with the sum of all the regulations from all the major economic blocs. It’s simply a matter of time.


The laws, regulations and other instruments in play

At some point in a topic like this, there simply have to be lists of government controls. Take time to read through them to learn how organisations worldwide are approaching these challenges. It’s worth noting that few of the individuals involved are software engineers; most see ‘software’ as a scary, magical, and currently uncontrolled element. The general assumption is that software is like any other manufactured component and can be dealt with similarly.

There are people at all levels who understand software, but they are in the minority and struggle to be effective. As this disconnect is explored and corrected, many court cases will inevitably occur. The devil is in the details, and there are many details.

Hence, I advise focusing on the common elements and taking all reasonable steps to create a strong, robust software supply chain and software engineering culture. See Part 3 for the checklists.

Regulation and Compliance for AI

European Union
The EU AI Act (Regulation (EU) 2024/1689) is the first comprehensive legal framework for AI, taking a risk-based approach.
Risk Categories: AI systems are classified as unacceptable risk (banned), high risk (regulated), and limited or minimal risk.
Obligations for High-Risk AI: Providers must implement trustworthiness and safety measures, including rigorous risk management, testing, and data quality controls.
Conformity Assessment: High-risk AI requires an AI quality management system and compliance audits.
Liability Considerations: The pending AI Liability Directive may extend software liability for AI-related harm.
More Information: https://digital-strategy.ec.europa.eu/en/policies/european-ai-act
United States
AI Governance: Multiple guidelines, regulations, and bills.
NIST AI Risk Management Framework: Voluntary but widely influential guidelines for AI risk mitigation.
Proposed Federal Legislation: Bills such as the Algorithmic Accountability Act aim to introduce impact assessments for AI.
Regulatory Oversight: The FTC and the AI Bill of Rights promote fairness and transparency.
State-Level Initiatives: NYC Local Law 144 mandates bias audits for automated hiring tools.
More Information: https://www.nist.gov/itl/ai-risk-management-framework
United Kingdom
Ethical and Sector-Specific AI Oversight
No comprehensive AI Act, but sector-specific regulations apply.
Guidance-based approach with oversight from regulatory bodies.
Algorithmic Transparency Standards encourage disclosure for AI systems in public sector applications.
More Information: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
China
Strict AI Regulation
Algorithmic Recommendation Rules (2022) mandate government registration for AI algorithms.
Generative AI Regulations (2023) require compliance with state-approved values.
Transparency & Security Controls: Mandatory content moderation, bias mitigation, and human oversight.
More Information:  http://www.cac.gov.cn/2023-07-13/c_1694165100702412.htm

Regulation and Compliance for Software Supply Chain Security

United States
Executive Orders and Standards
Executive Order 14028 (2021) mandates secure development practices and a Software Bill of Materials (SBOM) for government procurement.
NIST Secure Software Development Framework (SSDF) outlines best practices for source code integrity and vulnerability management.
Cybersecurity Maturity Model Certification (CMMC) requires defense contractors to meet security benchmarks.
More Information: https://www.nist.gov/itl/executive-order-14028
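Executive Order 14028 does not itself prescribe a file format; in practice SBOMs are exchanged as SPDX or CycloneDX documents produced by dedicated tooling. To illustrate the underlying idea only, the sketch below enumerates installed Python packages into a minimal SBOM-like inventory; a real SBOM carries far more metadata (licences, hashes, supplier, dependency relationships):

```python
from importlib import metadata

def build_inventory() -> list[dict]:
    """List installed Python distributions as a minimal SBOM-like inventory.

    Illustrative only: real SBOMs use standard formats such as SPDX or
    CycloneDX and should be generated by dedicated tools, not this sketch.
    """
    components = []
    for dist in metadata.distributions():
        components.append({
            "name": dist.metadata["Name"],  # distribution name from its metadata
            "version": dist.version,
        })
    # Sort for a stable, diffable inventory
    return sorted(components, key=lambda c: (c["name"] or "").lower())
```

Even this toy version shows the core property regulators care about: a machine-readable statement of exactly which components, at which versions, ship in your software.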
European Union
Cyber Resilience Act & NIS2 Directive
Cyber Resilience Act (CRA): Mandates secure-by-design principles, prohibits products with known vulnerabilities, and enforces post-release patching.
NIS2 Directive: Extends security requirements to more organizations, mandates supply chain security audits.
More Information: https://digital-strategy.ec.europa.eu/en/policies/cyber-resilience-act
United Kingdom
Product and Infrastructure Security
Product Security and Telecommunications Infrastructure (PSTI) Act (2022) bans default passwords and requires vulnerability disclosure policies.
NIS Regulations Update expands supply chain security oversight to new sectors.
More Information: https://www.ncsc.gov.uk/news/product-security-act
China
Cybersecurity Law and Supply Chain Controls
Multi-Level Protection Scheme (MLPS) mandates security testing and state-approved infrastructure for critical applications.
Cybersecurity Review Process requires foreign software vendors to pass national security audits.
More Information: http://en.mps.gov.cn/n2254314/index.html

Next Time

Read Part 3 to understand the sorts of checklists and evaluations that developers and their executives need to consider around software supply chain matters and AI governance & compliance.
