FDA Cybersecurity Compliance Self-Assessment ✅
Evaluate Your Device Against SPDF Submission Requirements
🔐 SPDF: The Core of FDA Cybersecurity Expectations
SPDF stands for Secure Product Development Framework — a core concept introduced by the U.S. Food and Drug Administration (FDA) in its 2023 guidance on cybersecurity for medical devices.
"A Secure Product Development Framework (SPDF) is a set of processes that help to reduce the number and severity of vulnerabilities in products throughout the device lifecycle."
SPDF is not a commercial framework or third-party standard. It's an FDA-defined expectation for integrating cybersecurity into every stage of the software and system development lifecycle — from design and implementation to maintenance and decommissioning.
This checklist is built around SPDF because it is the central organizing structure FDA uses to evaluate cybersecurity readiness. It covers Security Risk Management, Security Architecture, and Cybersecurity Testing — all required for compliance.
✅ A correctly implemented SPDF means you're meeting the FDA’s cybersecurity process expectations — but only if it includes every required activity and artifact.
The following checklist is based on the three core components of the Secure Product Development Framework (SPDF) as defined in the FDA’s 2023 Cybersecurity Guidance:
1. Security Risk Management (RM)
- Threat modeling
- Cybersecurity risk assessment
- Interoperability considerations
- Third-party components and SBOM
- Unresolved anomaly assessment
- TPLC security risk management
2. Security Architecture (SA)
- Implementation of required security controls (e.g., auth, crypto, logging)
- Architecture views (Global, Multi-Patient Harm, Updateability, Use-Case)
3. Cybersecurity Testing (CT)
- Requirements verification
- Threat mitigation testing
- Vulnerability scanning and SCA
- Penetration testing
⚠️ Note: While the checklist content aligns fully with the FDA’s structure, the order of execution has been adjusted to follow a more practical and logical implementation flow. This ensures better usability for engineering and regulatory teams while maintaining full compliance and traceability to FDA expectations.
✅ SECTION 1: SA - Architecture Views
Overview: The FDA requires four architecture views to demonstrate how cybersecurity is designed into the system. These diagrams establish context for threat modeling, risk analysis, and control selection. They must include explanatory text and reflect real-world use and deployment.
1.1 Global System View
Have you created a Global System View that shows all internal components, external systems, trust boundaries, and data flows — with a written narrative and version control?
1.2 Multi-Patient Harm View
Have you documented how your system prevents cascading impact across multiple patients using a Multi-Patient Harm View diagram and supporting explanation?
1.3 Updateability/Patchability View
Have you modeled how updates are securely delivered, authenticated, and applied — with visual diagrams and narrative stored in version control?
1.4 Security Use Case Views
Have you created views that illustrate how key security objectives (e.g., authentication, confidentiality) are implemented across critical workflows — and included them in your design documentation?
1.5 Version Control and Traceability
Are all architecture views version-controlled and traceable to your threat model and control documentation (e.g., risk table, SRTM, or control matrix)?
🛡️ SECTION 2: RM - Threat Modeling
Overview: Threat modeling is the foundation of FDA’s cybersecurity risk management process. It enables identification of threats, attack vectors, and system vulnerabilities based on your actual architecture and use context. FDA expects manufacturers to document and maintain threat models throughout the product lifecycle, using them to inform risk assessment, control selection, and testing strategies.
2.1 Use of Architecture Views
Have you used your architecture views (Global System, Use Case, etc.) to identify data flows, assets, boundaries, and workflows for threat modeling?
2.2 Threat Modeling Methodology
Have you selected and applied a structured threat modeling methodology (e.g., STRIDE, LINDDUN, or similar) to guide analysis?
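To make the methodology question concrete, the sketch below (purely illustrative, with hypothetical asset names) shows one way to seed a STRIDE-based analysis by pairing each asset from the architecture views with the six STRIDE threat categories; the resulting list then feeds the enumeration and evaluation steps in 2.4 and 2.5.

```python
# Illustrative STRIDE seeding: pair each asset or workflow with the six STRIDE
# categories to produce a starting list of threat hypotheses to analyze.
# Asset names below are hypothetical placeholders.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

assets = ["patient_data_store", "ota_update_channel", "clinician_login"]

threat_hypotheses = [
    {"asset": asset, "category": category, "status": "to_analyze"}
    for asset in assets
    for category in STRIDE
]

for hypothesis in threat_hypotheses[:3]:
    print(hypothesis)
```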
2.3 Asset & Workflow Definition
Have you defined key system assets (e.g., credentials, patient data, update mechanisms) and workflows (e.g., login, OTA updates) as part of the threat model?
2.4 Attack Surfaces & Entry Points
Have you enumerated your system's attack surfaces and entry points — including exposed interfaces, trust boundaries, and physical access paths — as part of the threat model?
2.5 Threat Enumeration & Evaluation
Have you identified and evaluated specific threats tied to workflows, assets, or interfaces (including impact, likelihood, and assumptions)?
2.6 Assumptions, Roles & Constraints
Have you recorded key threat modeling assumptions, attacker roles (e.g., internal, external), and known system constraints that influence threat exposure?
2.7 Third-Party Components & External Interfaces
Does your threat model include third-party software, APIs, libraries, and external interfaces (e.g., BLE, REST, HL7-FHIR)?
2.8 Version Control & Maintenance
Is your threat model stored in version control, regularly updated, and linked to risk assessments and control decisions?
🧮 SECTION 3: RM - Cybersecurity Risk Assessment & Interoperability
Overview: Risk assessment transforms your threat model into a prioritized, traceable analysis of cybersecurity risk. The FDA expects you to evaluate each threat based on likelihood, impact, and residual risk, and to demonstrate how existing or proposed mitigations reduce exposure. This process must be documented and maintained across the Total Product Lifecycle (TPLC), especially in interoperable, cloud-based, or multi-component systems.
3.1 Threat Impact & Likelihood Evaluation
Have you evaluated each identified threat for potential impact, likelihood of exploitation, and preconditions — and documented the rationale behind risk scoring?
3.2 Interoperability Risk Identification
Have you flagged threats arising from interoperable components (e.g., HL7-FHIR, REST APIs, BLE, cloud endpoints) and documented untrusted connections?
3.3 Mitigation Documentation
Have you documented existing mitigations or controls for each threat and described how they reduce risk?
3.4 Residual Risk Evaluation
Have you evaluated and recorded the residual risk for each threat — classifying it as acceptable, unacceptable, or requiring further mitigation?
3.5 Security Risk Table Documentation
Have you created a structured Security Risk Table (or SRTM) that includes threat ID, affected components, impact, likelihood, risk score, mitigation, and residual risk — with interoperability tags where applicable?
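As a hedged illustration of what a machine-readable risk table row might look like, the sketch below uses hypothetical IDs and values; the field names mirror the columns listed in 3.5, and records like this could be kept as CSV or JSON under version control.

```python
# Hypothetical Security Risk Table entry; field names mirror the columns above.
# In practice, rows like this could live in a CSV or JSON file under version control.
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class RiskEntry:
    threat_id: str
    affected_components: List[str]
    impact: str             # e.g., "high"
    likelihood: str         # e.g., "medium"
    risk_score: int         # per your own scoring scheme
    mitigation: str
    residual_risk: str      # "acceptable", "unacceptable", or "further mitigation"
    interoperability: bool  # tag for interoperable interfaces

entry = RiskEntry(
    threat_id="T-017",
    affected_components=["REST API gateway"],
    impact="high",
    likelihood="medium",
    risk_score=12,
    mitigation="Mutual TLS and token-based authentication on all API calls",
    residual_risk="acceptable",
    interoperability=True,
)

print(asdict(entry))
```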
3.6 Traceability and Testing Controls
Have you linked threats and mitigations to test plans, verification procedures, or security controls to support validation?
🧾 SECTION 4: SA - Implementation of Required Security Controls
Overview: This section addresses the implementation of specific cybersecurity controls that mitigate threats identified during risk analysis. FDA expects manufacturers to demonstrate that security is built into the product through appropriate, risk-based selection and application of controls. These should align with the five security objectives and cover areas such as authentication, cryptography, logging, integrity, resiliency, and updateability. Controls must be mapped to the threat model, clearly documented, tested, and justified.
4.1 Threat-to-Control Mapping
Have you reviewed your threat model and risk table to identify which threats require mitigation via technical controls — and mapped them accordingly?
4.2 Control Selection Based on FDA Guidance
Have you selected security controls aligned with FDA Appendix 1 and relevant frameworks (e.g., OWASP ASVS, NIST 800-53, IEC standards), covering areas like authentication, authorization, cryptography, logging, integrity, resiliency, and updateability?
4.3 Documentation of Threat–Control–Component–Test Mapping
Have you documented how each control maps to a specific threat, system component or workflow, and corresponding test case — in a traceable format like a control matrix or requirements table?
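One possible (illustrative) shape for such a traceability record is sketched below; the threat, control, and test-case IDs are placeholders that would map to entries in your own risk table, control matrix, and test plan.

```python
# Hypothetical traceability rows linking threats to controls, components, and
# test cases; all IDs are placeholders that would map to your own artifacts.
import csv
import sys

rows = [
    {"threat_id": "T-017", "control_id": "C-004", "component": "REST API gateway",
     "test_case": "TC-031", "status": "verified"},
    {"threat_id": "T-022", "control_id": "C-009", "component": "OTA update service",
     "test_case": "TC-045", "status": "open"},
]

writer = csv.DictWriter(sys.stdout, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
```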
4.4 Residual Risk and Compensating Controls
Have you identified threats that remain unmitigated and documented compensating controls or rationale for accepting residual risk?
4.5 Assumptions and Design Constraints
Have you documented assumptions and constraints (e.g., platform limits, legacy dependencies) that affect security control selection or implementation?
4.6 Architecture Diagram Integration
Have you updated your architecture diagrams to reflect security control placement (e.g., auth flows, encryption paths, isolation zones), and ensured consistency with implementation?
4.7 Control Traceability Format
Have you stored control decisions in a structured and version-controlled format such as a Security Requirements Traceability Matrix (SRTM), Control Mapping Sheet, or equivalent — with links to risks, components, and validations?
🧾 SECTION 5: RM - SBOM & Supply Chain Transparency
Overview: The FDA now expects device manufacturers to provide a Software Bill of Materials (SBOM) detailing all third-party, open-source, and proprietary components. This includes their version, supplier, known vulnerabilities, and support status. A complete SBOM is foundational for identifying inherited risks, evaluating software provenance, and responding to vulnerabilities throughout the device lifecycle. It must be machine-readable, maintained over time, and linked to testing, patching, and disclosure processes.
5.1 SBOM Completeness
Do you generate a complete Software Bill of Materials (SBOM) covering all third-party components, including name, version, supplier, hash, license, download URL, and end-of-support date (if available)?
5.2 SBOM Format
Do you generate the SBOM in a standardized format such as SPDX, CycloneDX, or SWID?
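For orientation only, the sketch below emits a minimal SBOM whose shape loosely follows the CycloneDX JSON format; component names, versions, and license data are placeholders, and in practice the SBOM would be produced by a build-system or SCA tool rather than by hand.

```python
# Sketch of a minimal SBOM in a CycloneDX-style JSON shape (illustrative only).
# Component, supplier, and license values are placeholders; hash, download URL,
# and end-of-support date would also be recorded per component.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "openssl",
            "version": "3.0.13",
            "supplier": {"name": "OpenSSL Project"},
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        }
    ],
}

print(json.dumps(sbom, indent=2))
```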
5.3 SBOM Maintenance
Do you regenerate and maintain the SBOM as part of your release process, for every release?
5.4 Vulnerability Tracking
Do you track known vulnerabilities in third-party components using a software composition analysis (SCA) tool or CVE/NVD feeds?
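A minimal sketch of that cross-referencing step is shown below, assuming the vulnerability data has already been pulled from an SCA tool or CVE/NVD feed into a local list; all component names and CVE identifiers are placeholders.

```python
# Sketch: cross-reference SBOM components against known vulnerabilities.
# In practice the vulnerability list would come from an SCA tool or CVE/NVD feed;
# component names and CVE identifiers here are placeholders.
components = [
    {"name": "libfoo", "version": "1.2.3"},
    {"name": "libbar", "version": "4.5.6"},
]

known_vulns = [
    {"name": "libfoo", "version": "1.2.3", "cve": "CVE-0000-00000", "severity": "HIGH"},
]

def vulns_for(component, vulns):
    # Return every known vulnerability matching this exact component and version.
    return [v for v in vulns
            if v["name"] == component["name"] and v["version"] == component["version"]]

for comp in components:
    for vuln in vulns_for(comp, known_vulns):
        print(f'{comp["name"]} {comp["version"]}: {vuln["cve"]} ({vuln["severity"]})')
```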
5.5 High-Risk Component Logging
Do you tag and track third-party components with HIGH/Critical vulnerabilities or End-of-Life (EOL) status in a vulnerability log?
5.6 Risk Model Integration
Are third-party risks explicitly integrated into your Threat Model and Security Risk Table?
5.7 Control Traceability Format
Have you stored control decisions in a structured and version-controlled format such as a Security Requirements Traceability Matrix (SRTM), Control Mapping Sheet, or equivalent — with links to risks, components, and validations?
🧪 SECTION 6: CT - Cybersecurity Testing
Overview: The FDA expects manufacturers to provide structured evidence that implemented security controls mitigate known threats and vulnerabilities. This includes verifying control behavior, testing threat mitigations, scanning for known issues (e.g., CVEs), and conducting independent penetration testing. All activities must be traceable to your security requirements, threat model, and architecture — and collectively support your residual risk justification and premarket submission.
6.1 Requirements Verification Coverage
Do you verify that each implemented control fulfills its associated security requirement (e.g., authentication, encryption, update flow), and document the result?
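As an illustrative example of a requirement-verification test, the pytest-style sketch below checks that an unauthenticated request to a protected endpoint is rejected; the base URL, endpoint path, and requirement ID are hypothetical.

```python
# Illustrative requirement-verification test (pytest style): a protected endpoint
# must reject unauthenticated requests. The base URL, endpoint path, and
# requirement ID (SR-012) are hypothetical placeholders.
import requests

BASE_URL = "https://device.example.local/api"  # hypothetical management API

def test_unauthenticated_request_is_rejected():
    # SR-012 (hypothetical): all API access requires authentication
    response = requests.get(f"{BASE_URL}/patients", timeout=5)
    assert response.status_code in (401, 403)
```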
6.2 SRTM Coverage Tracking
Do you use a Security Requirements Traceability Matrix (SRTM) to track coverage of your implemented security controls?
6.3 Threat Scenario Testing
Do you simulate attack scenarios from your threat model to test the effectiveness of your mitigations?
6.4 Residual Risk Logging
Do you log failed or partially satisfied controls along with an assessment of their residual risk?
6.5 Static Application Security Testing (SAST)
Do you perform Static Application Security Testing (SAST) to detect insecure code patterns and logic flaws?
6.6 Dynamic or Fuzz Testing
Do you perform Dynamic Application Security Testing (DAST) or fuzzing on exposed interfaces such as APIs, USB, or BLE?
6.7 Software Composition Analysis (SCA)
Do you use your SBOM in conjunction with SCA tools to identify vulnerabilities in third-party components?
6.8 Penetration Test Scope
Do you define a structured scope and methodology for your penetration tests, tied to your system architecture and threat model?
6.9 Independent Penetration Testing
Are penetration tests conducted by an independent team, separated from the development team?
6.10 Multi-Step Exploit Simulation
Do your pentests include multi-step or chained attack scenarios (e.g., chaining auth bypass into privilege escalation)?
🧮 SECTION 7: RM - Security Assessment of Unresolved Anomalies
Overview: The FDA requires manufacturers to document and assess any unresolved anomalies — security-relevant test failures, incomplete mitigations, or unpatched vulnerabilities — that persist after design and testing. Each anomaly must be evaluated for its potential impact, linked to the threat model and risk table, and accompanied by a clear rationale if it remains unmitigated. This is critical to ensuring residual risks are transparent, justified, and traceable, especially for premarket review.
7.1 Logging of Unresolved Anomalies
Do you maintain a structured log of unresolved anomalies, including failed, skipped, or partially successful security tests?
7.2 Anomaly Metadata and Disposition
For each unresolved anomaly, do you document its description, affected component, severity, and whether it was mitigated, accepted as residual risk, or deferred?
7.3 Risk Table Integration
Do you update your Security Risk Table to reflect unresolved anomalies and include associated residual risk justifications?
7.4 Justification for Non-Remediation
Do you provide clear, written justifications for not fixing specific security issues, such as compensating controls, low-risk categorization, or remediation infeasibility?
🔄 SECTION 8: RM - TPLC Security Risk Management
Overview: Cybersecurity is not a one-time effort. FDA expects manufacturers to maintain an active security risk management process across the entire product lifecycle, including postmarket monitoring, vulnerability triage, patching, documentation updates, and threat model revisions. This ensures the device remains secure as threats evolve and new vulnerabilities emerge. The TPLC process should be structured, measurable, and traceable, and aligned with both premarket plans and postmarket expectations.
8.1 Vulnerability Triage
Do you triage newly discovered vulnerabilities within defined SLAs and log metadata such as source, affected components, CVSS score, and response action?
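A minimal sketch of such a triage record, including a simple SLA check, is shown below; the severity thresholds, SLA windows, and vulnerability details are illustrative assumptions, not prescribed values.

```python
# Sketch of a vulnerability triage record with a simple SLA check.
# SLA windows, severity levels, and the vulnerability details are illustrative.
from datetime import datetime, timedelta, timezone

SLA_BY_SEVERITY = {"CRITICAL": timedelta(days=7), "HIGH": timedelta(days=30)}

entry = {
    "cve": "CVE-0000-00000",
    "source": "NVD feed",
    "affected_components": ["libfoo 1.2.3"],
    "cvss_score": 8.1,
    "severity": "HIGH",
    "reported": datetime(2024, 1, 10, tzinfo=timezone.utc),
    "triaged": datetime(2024, 1, 12, tzinfo=timezone.utc),
    "response_action": "patch scheduled for next maintenance release",
}

deadline = entry["reported"] + SLA_BY_SEVERITY[entry["severity"]]
print("triaged within SLA:", entry["triaged"] <= deadline)
```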
8.2 Patch Deployment and SLA Tracking
Do you have a defined patching process with SLAs and infrastructure to deliver, track, and confirm security updates?
8.3 Threat Model and Risk Register Maintenance
Do you update your threat model and security risk register after major changes such as new releases, vulnerability disclosures, or architectural updates?
8.4 Security Metrics and SPDF Review
Do you collect metrics such as time-to-patch, time-to-deploy, or % of vulnerabilities patched on time, and use them to improve your SPDF security processes?
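For illustration, the sketch below computes two such metrics (mean time-to-patch and percentage patched within 30 days) from hypothetical disclosure and patch dates; the 30-day window is an assumed target, not an FDA requirement.

```python
# Sketch: derive time-to-patch metrics from disclosure and patch-release dates.
# Dates, IDs, and the 30-day target are illustrative assumptions.
from datetime import date

records = [
    {"id": "VULN-001", "disclosed": date(2024, 2, 1), "patched": date(2024, 2, 20)},
    {"id": "VULN-002", "disclosed": date(2024, 3, 5), "patched": date(2024, 3, 12)},
]

days_to_patch = [(r["patched"] - r["disclosed"]).days for r in records]
print("mean time-to-patch (days):", sum(days_to_patch) / len(days_to_patch))
print("% patched within 30 days:",
      100 * sum(d <= 30 for d in days_to_patch) / len(days_to_patch))
```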
8.5 Coordinated Vulnerability Disclosure
Do you operate a Coordinated Vulnerability Disclosure (CVD) process with public intake and a defined triage/response workflow?