Privacy & Cybersecurity #65
Germany Data Act Implementation | ICO ADM Guidance | Poland AI Act Rollout | California AI Procurement Order | Utah Age Verification Law | NY AI Transparency Law | FCC Risky Routers List
April 5, 2026
🇩🇪 Germany Adopts Data Act Implementation Law
On 26 March 2026, the German Bundestag adopted the Data Act Implementation Act (DADG), establishing the national enforcement framework required under Regulation (EU) 2023/2854 (Data Act). The law operationalizes enforcement, supervision, and sanctions at the national level.
The DADG focuses on:
designation of competent authorities;
procedural rules for complaints and dispute resolution;
investigative and enforcement powers;
sanctions for non-compliance.
The Federal Network Agency (BNetzA) is designated as the central competent authority responsible for enforcing the Data Act in Germany. Its role includes:
acting as the primary contact point for complaints;
coordinating with sectoral regulators and data protection authorities;
reviewing data access requests, including those from public bodies;
conducting investigations and issuing corrective measures.
The enforcement approach is staged. Authorities are expected to first issue remedial requests and allow a reasonable period for compliance before imposing sanctions.
The DADG introduces a tiered administrative fines regime for breaches of Data Act obligations. Key thresholds include:
up to €5 million or 2% of global turnover for serious infringements (e.g., misuse of data by gatekeepers);
up to €500,000 for violations such as failure to provide data or unlawful data use;
up to €100,000 for lower-level infringements, including failures related to cloud switching or information duties.
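The tiered caps above reduce to simple arithmetic. The sketch below encodes them for illustration only; the tier labels and the “whichever is higher” reading of the turnover-linked cap are assumptions, not the statute’s wording.

```python
# Illustrative sketch of the DADG fine tiers described above (not legal advice).
# Assumption: for the top tier, the cap is the higher of the fixed amount and
# 2% of global turnover, following the common EU drafting pattern.

def dadg_fine_cap(tier: str, global_turnover_eur: float = 0.0) -> float:
    """Return the maximum administrative fine (EUR) for a given tier."""
    if tier == "serious":   # e.g. misuse of data by gatekeepers
        return max(5_000_000.0, 0.02 * global_turnover_eur)
    if tier == "mid":       # e.g. failure to provide data, unlawful data use
        return 500_000.0
    if tier == "low":       # e.g. cloud-switching or information duties
        return 100_000.0
    raise ValueError(f"unknown tier: {tier}")

# For a company with EUR 1bn global turnover, 2% of turnover (EUR 20m)
# exceeds the EUR 5m floor, so the turnover-based cap applies.
print(dadg_fine_cap("serious", 1_000_000_000))  # 20000000.0
```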
The law also amends German copyright law, clarifying that database rights do not apply where data is generated by connected products covered by the Data Act.
🇬🇧 ICO Updates ADM Guidance
On 31 March 2026, the UK Information Commissioner’s Office (ICO) published draft updated guidance on automated decision-making (ADM), including profiling, reflecting amendments introduced by the Data (Use and Access) Act 2025 (DUAA). The consultation is open to organizations deploying or planning to deploy ADM systems, as well as broader stakeholders.
The update aligns UK GDPR provisions with the newly introduced Articles 22A–22D, which refine the legal regime for solely automated decisions with legal or similarly significant effects.
The guidance confirms that ADM rules apply where three cumulative elements are present:
a decision about an individual;
producing legal or similarly significant effects; and
made solely by automated processing, without meaningful human involvement.
The guidance also clarifies that not all AI use constitutes ADM. Systems used for support (e.g. summarization tools) fall outside scope unless they materially influence outcomes affecting individuals.
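The three cumulative conditions can be read as a simple checklist. The sketch below is a minimal illustration of that logic; the field names are assumptions for readability, not ICO terminology.

```python
# Minimal sketch of the three cumulative ADM scope conditions described above.
# Field names are illustrative assumptions, not terms from the ICO guidance.
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    is_decision_about_individual: bool
    has_legal_or_similarly_significant_effect: bool
    has_meaningful_human_involvement: bool

def in_adm_scope(activity: ProcessingActivity) -> bool:
    """True only when all three cumulative elements are present."""
    return (
        activity.is_decision_about_individual
        and activity.has_legal_or_similarly_significant_effect
        and not activity.has_meaningful_human_involvement
    )

# A summarization tool that merely supports a human reviewer fails the
# third condition (meaningful human involvement remains), so it is out of scope:
print(in_adm_scope(ProcessingActivity(True, True, True)))  # False
```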
Organizations may rely on standard UK GDPR lawful bases (notably consent, contract, public task, or legitimate interests), but:
“Recognized legitimate interests” cannot be used for ADM, even though they apply in other contexts.
ADM involving special category data is prohibited unless strict conditions apply, including explicit consent or a combination of contract/legal authorization and substantial public interest.
The guidance reinforces that “necessity” must be objectively assessed. Efficiency or scalability alone is insufficient to justify ADM.
A central addition is a detailed articulation of mandatory safeguards under Article 22C UK GDPR. These include:
provision of decision-specific explanations, not generic system descriptions;
the right to make representations;
access to meaningful human intervention; and
the ability to contest decisions.
The ICO stresses that safeguards must be operationalized systematically. Ad hoc or discretionary application is not sufficient.
The guidance also clarifies the distinction between human involvement (which prevents a decision from being “solely automated” in the first place) and human intervention (a post-decision safeguard triggered by the individual).
The updated guidance expands expectations around transparency:
Individuals must receive meaningful information about logic, factors, and consequences of decisions.
Information must be decision-specific, enabling effective challenge.
Controllers should maintain audit trails showing key decision factors and alternative outcomes considered.
Timing obligations are also clarified. Information must be provided at data collection, upon access requests, and at the point ADM decisions are made.
The ICO reiterates that ADM is inherently high-risk. Most ADM deployments will require a Data Protection Impact Assessment (DPIA). Organizations must assess risks such as bias, discrimination, opacity, and error propagation, and systems should include mechanisms for bias detection, retraining, and auditability.
The consultation phase indicates further refinement is expected, particularly in areas such as explainability standards and interaction with forthcoming AI-specific regulation.
🇵🇱 Poland Moves to Implement AI Act Through National AI Supervision Framework
On 31 March 2026, the Polish Council of Ministers adopted a draft Act on Artificial Intelligence Systems (Project UC71), designed to operationalize and enforce the EU AI Act (Regulation (EU) 2024/1689) at the national level. The draft establishes a dedicated supervisory authority, introduces enforcement mechanisms, and defines procedures for complaints, certification, and sanctions.
The law is expected to enter into force 14 days after publication, subject to parliamentary approval.
The draft creates a new central authority — the Commission for the Development and Safety of Artificial Intelligence (KRiBSI) — which will act as Poland’s market surveillance authority under the AI Act.
KRiBSI will have a broad mandate, including:
conducting administrative proceedings and investigations;
issuing binding decisions and administrative fines;
assessing compliance of AI systems before and after market placement;
ordering restrictions or withdrawal of non-compliant AI systems.
The authority will also function as the national contact point and coordinate with EU institutions and other national regulators.
In parallel, the Polish Data Protection Authority (UODO) will play a specific role in supervising high-risk AI systems in sensitive domains such as law enforcement and justice.
The draft introduces a formal complaint mechanism allowing individuals to challenge AI systems that may violate legal requirements, including prohibited practices under Article 5 of the AI Act.
Complaints will result in administrative decisions subject to judicial review by the Warsaw Court of Competition and Consumer Protection.
The law enables enforcement of the AI Act’s prohibition regime, including:
bans on AI systems that pose unacceptable risks to fundamental rights;
administrative fines for violations of these prohibitions;
enforcement procedures allowing ex officio investigations by the authority.
The framework introduces mechanisms for:
conformity assessment and certification of AI systems;
supervision of high-risk AI deployment;
regulatory sandboxes to support testing and development of AI systems under supervision.
The Commission will also issue guidance, publish best practices, and provide individual opinions to businesses on compliance questions.
🇺🇸 California Introduces AI Procurement Safeguards Through Executive Order N-5-26
On 30 March 2026, Governor Gavin Newsom issued Executive Order N-5-26 establishing a new framework for the procurement and use of artificial intelligence by California state agencies. The order does not immediately impose binding obligations on private companies but initiates a structured process to introduce certification, risk screening, and contractual safeguards for AI vendors seeking to contract with the state.
The Order directs key agencies, including the Department of General Services (DGS) and the California Department of Technology (CDT), to submit recommendations within 120 days on new procurement requirements. These are expected to include certification mechanisms requiring vendors to attest to and explain their governance frameworks, particularly in relation to misuse risks, bias mitigation, and protection of civil rights.
The proposed procurement model signals a shift toward pre-contractual accountability for AI providers. Companies seeking to do business with California may be required to demonstrate:
Controls preventing the exploitation or distribution of illegal content;
Governance mechanisms to detect and mitigate harmful bias in AI systems;
Safeguards against violations of civil rights, including unlawful discrimination, surveillance, or interference with fundamental freedoms.
In parallel, the Order introduces additional governance elements:
A review mechanism for supply chain risks linked to federal designations of vendors;
Potential exclusion of vendors found to have undermined privacy or civil liberties;
Development of standardized contractual provisions addressing responsible AI use and data protection;
A state-wide data minimization toolkit and procurement checklists for high-risk data processing contexts.
The Order also mandates the development of watermarking guidance for AI-generated or manipulated content, aligning with existing California statutory requirements.
The Order took effect on 30 March 2026; agency recommendations on certification, procurement reforms, and AI governance standards are due within 120 days.
🇺🇸 Utah Expands Online Age Verification and Digital Identity Framework
In March 2026, Utah adopted SB 73 (Online Age Verification Amendments), introducing a combined regime of mandatory age verification, new tax obligations for online content providers, and enhanced enforcement powers for the Division of Consumer Protection. The law takes effect primarily on May 6, 2026, with certain tax provisions effective October 1, 2026.
At the same time, Utah continues to develop its broader State-Endorsed Digital Identity (SEDI) framework, which introduces novel privacy concepts such as a statutory “duty of loyalty” for digital identity ecosystem participants.
SB 73 targets commercial entities that publish or distribute material harmful to minors online and introduces three main layers of obligations:
1) Mandatory age verification. Entities must implement “reasonable age verification methods” before allowing access to restricted content. These may include:
government-issued digital identification;
third-party verification services;
other commercially reasonable verification mechanisms.
Failure to implement such measures creates direct civil liability, including damages and legal costs, where minors access restricted content. The law also prohibits retaining identifying data after verification and restricts facilitating circumvention (e.g., publishing guidance on using VPNs to bypass verification).
2) Registration, monitoring, and enforcement. Covered entities must:
notify the Division of Consumer Protection;
pay an annual notification fee (USD 500);
submit to audits and investigations.
Failure to notify may result in daily administrative penalties (USD 1,000 per day).
The Division is granted broad enforcement powers, including audits and investigations, administrative fines, court actions, injunctions, and disgorgement.
3) New tax regime linked to content and compliance. The law introduces two tax layers:
a 7% tax on gross receipts from content deemed harmful to minors (state-based nexus triggers);
a 2% excise tax on covered digital transactions (effective October 2026).
Tax revenues are earmarked for teen mental health programs and enforcement activities, reflecting a policy link between online harms and public health funding.
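The two tax layers are straightforward percentages of different bases. The sketch below shows the arithmetic only; the precise scope of “gross receipts” and “covered digital transactions” is defined by the statute, and the bases used here are illustrative assumptions.

```python
# Illustrative arithmetic for the two SB 73 tax layers described above.
# Rates come from the bill summary (7% gross receipts; 2% excise from Oct 2026);
# what counts as taxable receipts or covered transactions is assumed here.

GROSS_RECEIPTS_RATE = 0.07  # on receipts from content deemed harmful to minors
EXCISE_RATE = 0.02          # on covered digital transactions

def utah_sb73_taxes(harmful_content_receipts: float,
                    covered_transaction_value: float) -> dict:
    """Return both tax amounts (USD) for the given illustrative bases."""
    return {
        "gross_receipts_tax": harmful_content_receipts * GROSS_RECEIPTS_RATE,
        "excise_tax": covered_transaction_value * EXCISE_RATE,
    }

print(utah_sb73_taxes(1_000_000, 250_000))
# {'gross_receipts_tax': 70000.0, 'excise_tax': 5000.0}
```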
SB 73 operates alongside Utah’s broader SEDI initiative, which is designed to support secure digital identity use cases, including age verification. SEDI introduces several privacy-oriented design features:
prohibition of “phone home” tracking architectures;
selective disclosure of identity attributes;
purpose limitation and consent requirements;
a statutory “duty of loyalty” requiring ecosystem actors to act in the individual’s best interests.
The law prohibits retention of identifying data post-verification, yet encourages use of identity-based verification methods and operates in parallel with a digital identity infrastructure.
Jurisdictionally, the law applies where content is produced, sold, or otherwise “based in” Utah; enforcement rests with the Division of Consumer Protection through its investigative authority, audit powers, and penalty mechanisms.
🇺🇸 New York Enacts AI Transparency and Safety Framework for Frontier Models
New York has adopted a new regulatory framework governing developers of advanced artificial intelligence systems, replacing earlier provisions and introducing structured transparency and safety obligations. Senate Bill S8828, signed by the Governor, amends the General Business Law to establish a dedicated regime for “frontier” AI models, with a focus on disclosure, incident reporting, and regulatory oversight.
The law repeals elements of the prior 2025 framework and introduces a more formalized system designed to standardize how developers document and communicate the capabilities, risks, and impacts of their models. It also mandates the creation of an oversight function responsible for monitoring compliance with transparency and reporting obligations.
The legislation rests on three pillars:
Transparency obligations: Developers of frontier AI models must provide structured information regarding system capabilities, limitations, and potential impacts.
Incident reporting mechanisms: The law introduces formal systems to capture and analyze post-deployment incidents.
Regulatory oversight: A dedicated office will supervise developer compliance.
🇺🇸 FCC Adds Foreign-Made Consumer Routers to Covered List
On 23 March 2026, the Federal Communications Commission (FCC) updated its Covered List to include all consumer-grade routers produced in foreign countries, following a national security determination by U.S. Executive Branch agencies. The decision effectively blocks authorization of new models of such devices for importation, marketing, or sale in the United States.
The measure is based on findings that foreign-manufactured routers present “unacceptable risks” to U.S. national security, including supply chain vulnerabilities and the potential for exploitation in cyber operations targeting critical infrastructure and private users.
The update operates through the FCC’s equipment authorization regime under the Secure and Trusted Communications Networks Act:
New consumer router models produced abroad are no longer eligible for FCC authorization, effectively excluding them from the U.S. market.
Existing, previously authorized models remain lawful to import, sell, and use.
End users are not required to replace currently deployed devices.
An exception applies where the Department of Defense or Department of Homeland Security grants “Conditional Approval,” allowing certain devices to proceed through the authorization process.
***
Direct your questions to groundcontrol@kepler.consulting.
Until the next transmission, stay secure and steady on course. Ground Control, out.

