Move Fast, Break Laws: AI, Open Source and Devs (Part 3)

Steve Poole

Background

The software development landscape is rapidly changing, with legislation emerging as a key driver of industry trends. As our reliance on software and AI grows, so does our vulnerability to cybercrime, which is now a multi-trillion-dollar problem. This has caught the attention of regulators worldwide. 

Part 1 covered the background, what a software supply chain is and thoughts on AI and open source.

Part 2 explored how governments are working to create legislation and what the current status is.

Part 3 (this article) offers software supply chain and AI governance & compliance checklists for developers and executives to consider.

Part 4 discusses cybersecurity and incident reporting requirements, examines geopolitical compliance and liability management, and wraps up the series.

There’s a lot to take in. I hope you’re sitting comfortably.

Accountability Cannot Be Outsourced

I am not a lawyer. This document is a technical view of the legislation and regulations being developed or repurposed. It's imperative to get your own legal assessment when deciding whether these elements apply to your situation. Having said that, some aspects are shared, and the primary one is accountability. There's no dodging your responsibilities: wherever you sit in the software supply chain, you are responsible both to those who consume your software and to those who ultimately use it. Regulations collectively require organisations to assess, monitor, and manage third-party risks, and you'll have to prove that you did the right thing at the right time.

Blaming others without proper due diligence and safeguards is not a valid defence!


Responsibilities and Compliance Checklist

The following sections list commercial tools. They are for example only and do not constitute recommendations or endorsements.

This next section has significant references to keeping documentation and records secure. It’s important to realise that this is both evidence and a critical asset. It’s evidence that your software (and AI) follows the rules, but it’s also a vital asset because it tells you what’s happening and what’s in the software you ship. When things go wrong, this data asset will be a primary source for quickly determining why – and you will need the speed.

AI Governance & Compliance

Classify AI risk levels and maintain documentation of training data and testing.

Start by assessing your AI system using a formal risk classification framework. The EU AI Act, for example, categorizes AI applications into unacceptable, high-risk, limited-risk, and minimal-risk tiers, based on their potential impact on rights and safety. Adopt a similar internal model tailored to your industry, such as using a risk matrix or adapting NIST’s AI Risk Management Framework (NIST AI RMF). For each AI model, maintain clear documentation on training data sources, preprocessing steps, data lineage, and test set composition. This supports reproducibility and helps identify blind spots or harmful correlations. Tools like Model Cards or Datasheets for Datasets offer great starting points for structured documentation.
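As a starting point for an internal model, the tiering logic can be captured in code so it is applied consistently across systems. The sketch below is illustrative only: the profile fields and classification rules are hypothetical simplifications loosely inspired by the EU AI Act's tiers, not a legal mapping.

```python
from dataclasses import dataclass

# Hypothetical risk tiers loosely modelled on the EU AI Act's
# unacceptable / high-risk / limited-risk / minimal-risk categories.

@dataclass
class AISystemProfile:
    name: str
    affects_individual_rights: bool   # e.g. hiring, lending, medical decisions
    operates_autonomously: bool       # acts without human review
    uses_biometric_data: bool

def classify_risk(profile: AISystemProfile) -> str:
    """Map a system profile to a risk tier (illustrative rules only;
    a real classification needs legal and domain review)."""
    if profile.uses_biometric_data and profile.operates_autonomously:
        return "unacceptable"
    if profile.affects_individual_rights:
        return "high"
    if profile.operates_autonomously:
        return "limited"
    return "minimal"

chatbot = AISystemProfile("support-bot", False, False, False)
screener = AISystemProfile("cv-screener", True, True, False)
print(classify_risk(chatbot))   # minimal
print(classify_risk(screener))  # high
```

Encoding the rules this way means every new AI system gets the same questions asked, and the answers become part of the documentation trail.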

Implement bias testing, transparency controls, and human oversight.

Bias testing should be integrated into both the training and deployment phases. Use libraries like Fairlearn or IBM’s AI Fairness 360 to test for statistical parity, disparate impact, and equal opportunity across demographic groups. For transparency, implement model explainability techniques such as SHAP or LIME to interpret decisions and expose reasoning, especially for high-stakes outputs. Consider open-sourcing model summaries or publishing governance reports to build public trust. Establish a human-in-the-loop (HITL) mechanism, especially for decisions impacting individuals (e.g. hiring, lending, medical diagnosis), to ensure ethical judgment and intervene when the AI’s decision seems anomalous or harmful.
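To make the statistical-parity idea concrete, here is a hand-rolled version of the demographic parity difference metric: the largest gap in positive-prediction rate between groups. Fairlearn provides an equivalent (`fairlearn.metrics.demographic_parity_difference`); the toy data below is invented for illustration.

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between demographic groups.
    A value near 0 suggests parity across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy hiring-model predictions (1 = advance candidate), one group label each.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

Group A advances 3 of 4 candidates, group B only 1 of 4, giving a 0.5 gap. A test like this, run on every candidate model, turns a policy statement about fairness into a number you can gate a release on.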

Log all AI decisions and risk assessments for regulatory audits.

Logging isn’t just for debugging. Now, it’s a regulatory safeguard. Implement detailed, immutable audit logs that capture every significant AI decision, including input, model version, output, confidence score, and human intervention. Use tools like MLflow or Weights & Biases to track model runs and lineage. For risk assessments, ensure each deployment or model update includes a structured evaluation stored in an internal GRC platform. This record can serve as evidence during regulatory inspections or internal reviews and can be paired with incident response plans to ensure traceability in case of harm or non-compliance.
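One simple way to make an audit log tamper-evident, without special infrastructure, is to hash-chain the entries so that any later edit breaks verification. The sketch below is a minimal illustration of that idea; the field names and the credit-scoring example are hypothetical, and production systems would add persistence and access controls.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only decision log where each entry includes a hash of its
    predecessor, making after-the-fact tampering detectable."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, model_version, inputs, output, confidence, reviewer=None):
        entry = {
            "ts": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "confidence": confidence,
            "reviewer": reviewer,          # human-in-the-loop, if any
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("credit-model-1.4", {"income": 52000}, "approve", 0.91)
log.record("credit-model-1.4", {"income": 18000}, "decline", 0.77, reviewer="analyst-7")
print(log.verify())  # True
```

The same structure pairs naturally with the model-tracking tools mentioned above: the model version recorded per entry is what lets you tie a disputed decision back to a specific training run during an inspection.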

Ensure external AI vendors comply with required documentation and risk mitigation frameworks.

If you’re integrating third-party AI services (e.g., vision APIs, LLMs, and analytics engines), treat them as part of your extended supply chain. Require vendors to provide documentation that matches your internal standards: proof of data provenance, fairness evaluations, security certifications (e.g. ISO/IEC 27001), and compliance with frameworks like the OECD AI Principles or EU AI Act. Build these requirements into procurement policies and conduct regular audits or assessments of vendor practices. If you use SaaS models or APIs, verify if vendors support transparency mechanisms like System Cards or responsible disclosure programs. Establish an exit strategy if a vendor fails to meet ethical or compliance standards.
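Procurement gating of this kind is easy to automate once the required artifacts are named. The artifact names below are hypothetical placeholders; substitute whatever your procurement policy actually demands.

```python
# Hypothetical artifact names; adapt to your own procurement policy.
REQUIRED_ARTIFACTS = {
    "data_provenance",         # where the vendor's training data came from
    "fairness_evaluation",     # bias-testing results
    "security_certification",  # e.g. ISO/IEC 27001
    "framework_conformity",    # e.g. OECD AI Principles / EU AI Act mapping
}

def vendor_gaps(provided):
    """Artifacts a vendor must still supply before sign-off."""
    return REQUIRED_ARTIFACTS - set(provided)

def approve_vendor(name, provided):
    missing = vendor_gaps(provided)
    if missing:
        print(f"{name}: blocked, missing {sorted(missing)}")
        return False
    print(f"{name}: approved")
    return True

approve_vendor("vision-api-co", {"data_provenance", "security_certification"})
```

Running a check like this on every renewal, not just at onboarding, also gives you the evidence trail for the exit-strategy decision when a vendor drifts out of compliance.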

Software Supply Chain Security

Generate and maintain SBOMs for all software releases.

A Software Bill of Materials (SBOM) is a critical artifact that lists all components, dependencies, and libraries in a software release. SBOMs enable faster vulnerability detection, streamline incident response, and are increasingly becoming a regulatory requirement (e.g., US Executive Order 14028). Use tools such as Syft to produce SBOMs in CycloneDX or SPDX format, automating SBOM generation at build time and integrating it into CI/CD pipelines. Maintain versioned SBOMs alongside software artifacts and publish them internally (or externally when required) to support transparency, compliance, and rapid triage during widespread supply chain vulnerabilities like Log4Shell.
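To see why the SBOM pays off during an incident, consider the triage question Log4Shell forced on every team: "do we ship log4j-core, and which version?" With machine-readable SBOMs on file, that becomes a lookup. The fragment below uses a minimal invented CycloneDX-style document; real SBOMs from tools like Syft carry many more fields per component.

```python
import json

# Minimal CycloneDX-style SBOM fragment (invented for illustration).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"name": "jackson-databind", "version": "2.15.2",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.15.2"}
  ]
}
"""

def components_by_name(sbom):
    """Index an SBOM's components for fast incident triage."""
    return {c["name"]: c["version"] for c in sbom.get("components", [])}

index = components_by_name(json.loads(sbom_json))
print(index.get("log4j-core"))  # 2.14.1
```

Run across every release's SBOM, the same lookup answers "which shipped versions are affected?" in minutes rather than days.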

Treat SBOMs like the audit documents they are, and be prepared to have many for any particular release. Each stage of the process will need its own SBOM, as a single comprehensive "uber SBOM" does not yet exist. Keep each one and tie them together.

Use approved repositories and conduct automated security audits.

Restrict all software builds to pull dependencies only from vetted, policy-compliant artifact repositories. This prevents inadvertent use of malicious or outdated libraries. Additionally, implement continuous security audits using tools such as OWASP Dependency-Check, Trivy, or Snyk. These tools should be part of your CI/CD pipelines to flag issues early and enforce fail-fast mechanisms when detecting high-severity vulnerabilities. Use policy-as-code (e.g., Open Policy Agent) to automate enforcement of repository and audit standards across projects.
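In practice such a policy would usually be written in Rego and enforced by Open Policy Agent, but the logic it encodes is simple enough to sketch directly. The repository names, dependency fields, and severity thresholds below are all hypothetical.

```python
# Hypothetical policy data; in production this would live in a
# policy-as-code tool such as Open Policy Agent rather than in the build.
APPROVED_REPOS = {"registry.internal.example.com", "repo.maven.apache.org"}
FAIL_SEVERITIES = {"CRITICAL", "HIGH"}

def build_gate(dependencies, scan_findings):
    """Fail-fast gate: reject unapproved sources and severe findings.
    Returns a list of violations; an empty list means the build may proceed."""
    errors = []
    for dep in dependencies:
        if dep["source"] not in APPROVED_REPOS:
            errors.append(f"{dep['name']}: unapproved source {dep['source']}")
    for finding in scan_findings:
        if finding["severity"] in FAIL_SEVERITIES:
            errors.append(f"{finding['id']} ({finding['severity']}) in {finding['package']}")
    return errors

deps = [{"name": "left-pad", "source": "sketchy-mirror.example.net"}]
vulns = [{"id": "CVE-2021-44228", "severity": "CRITICAL", "package": "log4j-core"}]
for violation in build_gate(deps, vulns):
    print("BLOCK:", violation)
```

Wiring a gate like this into the CI/CD pipeline is what turns the repository and audit standards from a policy document into something that actually stops a bad build.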

Ensure you understand the capabilities of the audit tools. You may need more than one to cover the tools, technologies, and libraries being used or shipped.

Follow secure development standards (NIST SSDF, OWASP, ISO 27034).

Adhering to formal secure development frameworks builds long-term software assurance. The NIST Secure Software Development Framework (SSDF) provides a high-level guide to integrating security across the SDLC. Similarly, OWASP’s Secure Coding Practices and ISO/IEC 27034 offer guidance for aligning software security with enterprise governance. Use these standards to define secure coding checklists, enforce peer review criteria, and guide security training programs. Document your organization’s alignment to these standards and conduct regular maturity assessments to identify gaps and drive improvement initiatives.

Document supplier security attestations and vulnerability management processes.

Every third-party software supplier should provide a clear attestation of their security posture, ideally through standardized forms like the OpenSSF Supplier Declaration of Conformance or SLSA levels. Collect and store these attestations in a centralized, auditable repository, and map each supplier to the products or services they impact. In parallel, maintain an internal vulnerability management policy that defines how vulnerabilities are tracked, triaged, remediated, and disclosed (if applicable). Integrate this with ticketing systems and ensure audit trails exist for every high-severity finding and its resolution.
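A centralised attestation registry can be as simple as a structured record per supplier, with an expiry date that is actually checked. The suppliers, frameworks, and products below are invented; the point is the supplier-to-product mapping and the lapse check.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Attestation:
    supplier: str
    framework: str    # e.g. "SLSA Build L3" or an OpenSSF declaration
    expires: date
    products: tuple   # products/services this supplier feeds into

# Hypothetical registry contents.
REGISTRY = [
    Attestation("Acme Components", "SLSA Build L3", date(2026, 1, 1), ("billing-api",)),
    Attestation("Widget Labs", "OpenSSF declaration", date(2024, 6, 1), ("web-ui", "cli")),
]

def lapsed(registry, today):
    """Suppliers whose attestations have expired, with affected products."""
    return {a.supplier: a.products for a in registry if a.expires < today}

print(lapsed(REGISTRY, date(2025, 3, 1)))
```

Because each attestation maps back to products, an expired or withdrawn attestation immediately tells you which of your own offerings are exposed, which is exactly the audit-trail question a regulator will ask.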

Implement software composition analysis (SCA) tools to detect and remediate vulnerabilities in dependencies.

Software Composition Analysis (SCA) tools scan your project’s dependencies to identify known vulnerabilities and licensing issues. Integrate third-party tools or open-source tools like OSS Review Toolkit (ORT) into your development workflow to automate alerts and remediation PRs. Most SCA tools support policy enforcement, allowing teams to block deployments with critical CVEs or disallowed licenses. To maximize impact, integrate SCA findings into developer dashboards and prioritize fixes based on exploitability and usage context. Make remediation progress part of your security OKRs or engineering KPIs to ensure accountability.
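Prioritising by exploitability and usage context can be made explicit with a simple scoring rule. The weights below are invented for illustration; real programmes often draw the "known exploited" signal from sources such as CISA's Known Exploited Vulnerabilities catalogue and the "reachable" signal from an SCA tool's reachability analysis.

```python
SEVERITY_WEIGHT = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1}

def fix_priority(finding):
    """Higher score = fix sooner. Weights are illustrative only."""
    score = SEVERITY_WEIGHT[finding["severity"]]
    if finding.get("known_exploited"):  # e.g. listed in an exploited-vulns catalogue
        score += 3
    if finding.get("reachable"):        # the vulnerable code path is actually used
        score += 2
    return score

# Invented findings: a plain HIGH vs an exploited, reachable MEDIUM.
findings = [
    {"id": "CVE-A", "severity": "HIGH", "known_exploited": False, "reachable": False},
    {"id": "CVE-B", "severity": "MEDIUM", "known_exploited": True, "reachable": True},
]
queue = sorted(findings, key=fix_priority, reverse=True)
print([f["id"] for f in queue])  # ['CVE-B', 'CVE-A']
```

Note that the exploited, reachable MEDIUM outranks the theoretical HIGH: severity alone is a poor fix queue, and encoding the context makes that trade-off visible and reviewable.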

Like audit tools, SCA tools can have different scopes and levels of sophistication. Ensure you understand the capabilities and limits of the SCA tools you choose. Picking a less capable one may save money initially but may place you in a problematic situation if vulnerabilities are found externally via a published SBOM analysis by a 3rd party. Shipping code with no known vulnerabilities is a key goal of cybercrime regulations.

Next Time

Read part 4 to conclude this series. There’s much left to unpack, including cybersecurity and incident reporting requirements, geopolitical compliance, and liability management.
