Privacy & Cybersecurity #31
UK Data Act Enacted | Connecticut Expands Privacy Law | GDPR Enforcement Deal Reached | EU Tables Algorithmic Management Directive | CNIL Consults on AI and Legitimate Interest
🇬🇧 UK: Data (Use and Access) Act 2025 Receives Royal Assent 👑
On 19 June 2025, the UK’s Data (Use and Access) Bill received Royal Assent and is now enacted as the Data (Use and Access) Act 2025. As previously covered in UK Passes Data Use and Access Bill, the Bill had undergone extensive debate before clearing both Houses of Parliament.
Key Legal Changes
Recognised Legitimate Interests. The Act introduces a list of recognised legitimate interests—such as national security, safeguarding children, or democratic engagement—for which organisations can rely on legitimate interest without needing to conduct the balancing test previously required under UK GDPR. This is intended to reduce administrative burdens in clearly justified cases.
Secondary Processing and Re-Use of Personal Data. The Act clarifies when further processing is compatible with the original purpose, providing a lawful basis for re-use in certain public interest contexts. For example, re-use is permitted in research, law enforcement, and national security without requiring renewed consent, provided compatibility conditions are met.
International Data Transfers – New 'Data Protection Test'. Replacing the EU-style adequacy mechanism, the Act introduces a “data protection test” for international transfers. A third-country regime will satisfy the test if its protections are not “materially lower” than those of UK law. This potentially opens up more flexible transfer options while still maintaining baseline safeguards.
Automated Decision-Making. The definition has been narrowed: the restrictions now apply only to decisions made entirely without meaningful human involvement. While key safeguards remain (e.g. the right to human review and an explanation), many automated systems may now fall outside their scope.
Data Subject Rights and DSARs. The Act reaffirms that DSARs must be handled within one month, with a possible two-month extension for complex cases. It introduces clearer grounds for refusing excessive requests, offering a more practical framework for organisations handling high volumes of data subject interactions.
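To make those timing rules concrete, here is a minimal sketch of DSAR deadline arithmetic; the function name is illustrative, and the calendar-month handling relies on the third-party dateutil package rather than anything prescribed by the Act or the ICO.

```python
# Minimal sketch of DSAR deadline arithmetic under the Act:
# one calendar month by default, extendable by up to two further
# months for complex cases. Names are illustrative, not taken
# from the Act or ICO guidance.
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

def dsar_deadline(received: date, complex_case: bool = False) -> date:
    # 1 month by default; 1 + 2 = 3 months where the extension applies.
    months = 3 if complex_case else 1
    return received + relativedelta(months=months)

print(dsar_deadline(date(2025, 6, 19)))                     # 2025-07-19
print(dsar_deadline(date(2025, 6, 19), complex_case=True))  # 2025-09-19
```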
ICO Powers and Regulatory Direction. The Act enhances the ICO’s investigatory and enforcement powers and introduces a mechanism for the Secretary of State to set strategic priorities for the ICO. While intended to align the regulator’s focus with national objectives, this change has raised questions about long-term independence.
Public Sector Data Use and Digital Identity. The Act establishes a legal framework for certified digital identity services and clarifies lawful grounds for data sharing between public bodies and private organisations, especially where it serves defined public interest goals (e.g. managing traffic, streamlining services). Participation remains voluntary but is now underpinned by clearer legal rules.
AI and Copyright Transparency. A proposed requirement for AI developers to publish details of training data was dropped. Instead, the Act commits the government to bring forward future legislation to address copyright and transparency issues in AI training.
❗Track upcoming ICO guidance and consultations on the ICO website: ICO – The Data Use and Access Act 2025: What Does It Mean for Organisations?
🇺🇸 Connecticut Amends Privacy Law
On June 12, 2025, Connecticut enacted Public Act No. 25-113 (Substitute Senate Bill No. 1295), which amends the Connecticut Data Privacy Act (CTDPA). The amendments broaden the law’s scope, introduce new duties regarding minors’ data, and strengthen obligations around consumer health data and profiling. Most provisions will take effect on July 1, 2026.
Key Changes to the CTDPA
1. Expanded Applicability
The applicability threshold has been lowered: the law now covers entities that control or process the personal data of at least 35,000 consumers (down from 100,000). It also applies to entities that process sensitive data or offer personal data for sale, regardless of the number of consumers involved.
2. Stronger Protections for Children’s Data
The law now prohibits processing minors’ data for targeted advertising, sale, or certain profiling activities unless strictly necessary and with appropriate consent:
Parental consent is required for children under 13.
Consent from the minor is required for those aged 13 to 17.
Controllers must not use system design features that materially increase or sustain a minor’s use of an online service unless appropriate safeguards are in place.
3. Data Protection and Profiling Impact Assessments
Controllers must conduct assessments not only for profiling or targeted advertising, but also whenever a service is offered to minors or profiling produces legal or similarly significant effects.
These assessments must be retained for three years and provided to the Attorney General upon request.
4. Broadened Definitions of Sensitive Data
“Sensitive data” now includes neural data, government IDs, financial login credentials, and data on crime victim status.
“Consumer health data” explicitly includes gender-affirming care and reproductive or sexual health data.
5. Consumer Rights Enhancements
Consumers now have the right to:
Opt out of data sales and targeted advertising via browser-based opt-out preference signals (see the sketch after this list).
Obtain a list of third parties to whom their data was sold.
Question and challenge automated decisions, especially in housing contexts.
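For illustration, here is a minimal sketch of honoring a browser-based opt-out preference signal server-side via the Global Privacy Control header; the Flask setup and the record_opt_out helper are assumptions for the example, not requirements drawn from the amended statute.

```python
# Minimal sketch of honoring a browser-based opt-out preference
# signal (Global Privacy Control) server-side. Assumes a Flask app;
# record_opt_out() is a hypothetical helper, not a real API.
from flask import Flask, request, jsonify

app = Flask(__name__)

def record_opt_out(user_key: str) -> None:
    # Hypothetical: persist the opt-out of sale/targeted advertising.
    print(f"opt-out recorded for {user_key}")

@app.route("/api/session")
def session_info():
    # GPC-enabled browsers send the request header "Sec-GPC: 1".
    opted_out = request.headers.get("Sec-GPC") == "1"
    if opted_out:
        record_opt_out(request.remote_addr)
    return jsonify({"opted_out_of_sale_and_targeted_ads": opted_out})
```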
6. Notice and Consent Requirements
Controllers must clearly disclose whether they train large language models using personal data.
Privacy notices must be conspicuously posted and updated with the month and year of last revision.
Retroactive material changes to privacy practices require affirmative re-consent from users.
7. Service Provider (Processor) Contracts
Processor obligations are expanded to ensure transparency and accountability, with detailed requirements on confidentiality, subcontracting, and controller audits.
8. New Requirements for Online Services to Minors
Services directed to minors must use reasonable care to avoid “heightened risks of harm.”
This includes restrictions on data retention, profiling, and geolocation tracking.
Direct messaging to minors must default to a protective mode unless the minor already has an established connection with the sender.
Recommendations for Businesses
Reassess applicability: Businesses processing the personal data of at least 35,000 but fewer than 100,000 consumers may now fall within scope.
Prepare for minors’ data compliance: Review all services used by or directed to individuals under 18 and implement age-appropriate design and consent mechanisms.
Update privacy notices and opt-out mechanisms: Ensure clarity, accessibility, and inclusion of new required disclosures (e.g., LLM use, profiling).
Conduct assessments: Prepare for the expanded data protection and profiling impact assessments, particularly if your services involve minors or automated decision-making.
Review vendor contracts: Ensure processors meet the updated statutory requirements.
🇪🇺 EU Reaches Provisional Agreement on Cross-Border GDPR Enforcement Reform
On 16 June 2025, the Council of the EU and the European Parliament reached a provisional agreement on a new regulation to improve the enforcement of the General Data Protection Regulation (GDPR) in cross-border cases. The agreement aims to streamline cooperation among national data protection authorities (DPAs) and accelerate complaint handling for individuals and organizations across the EU.
What Will Change
Common Rules for Admissibility. All cross-border complaints will be assessed using the same rules. This means that when a citizen or organization submits a GDPR complaint about cross-border processing, the requirements to accept the case will be the same in every Member State.
Rights for Complainants and Companies. The complainant will have the right to be heard if their complaint is rejected. Both the complainant and the company or organization under investigation will be informed of the preliminary findings and given a chance to respond before any final decision is made.
Time Limits for Investigations. The regulation introduces new deadlines for investigations:
A standard cross-border investigation should be completed within 15 months.
In complex cases, this can be extended by up to 12 months.
Simpler cases should be closed within 12 months.
Faster Complaint Resolution. Authorities will be able to resolve certain cases early—before going through the full cooperation procedure—if the issue has already been fixed and the complainant agrees.
Simpler Procedure for Straightforward Cases. For non-contentious matters, authorities may use a simpler process with fewer formal steps. This will help reduce unnecessary delays and paperwork.
More Transparency Between Authorities. The lead authority must share a summary of the main points of the case with other concerned authorities early in the process. This should help build agreement and prevent disputes later.
The agreement now needs to be formally approved by both the Council and the European Parliament. Once adopted, the regulation will apply across the EU.
🇫🇷 France: CNIL Publishes Summary of Public Consultation on Legitimate Interest and AI Development
On 10 June 2024, France’s data protection authority (CNIL) launched a public consultation on the use of the legitimate interest legal basis for developing AI systems. The consultation, part of CNIL’s broader effort to clarify GDPR compliance during AI development, focused on draft guidance covering legitimate interest, web scraping, the open-source release of models, and user transparency. The summary of contributions, published in June 2025, outlines CNIL’s evolving position and planned adjustments.
Key Themes and Outcomes
1. Legitimate Interest in AI Development. Many respondents called for greater clarity and concrete examples to make the guidance more operational. CNIL responded by:
Including new examples illustrating how legitimate interest can be applied in edge cases.
Reaffirming that no hierarchy exists between consent and legitimate interest, but that consent is required where legitimate interest cannot meet GDPR conditions or where other laws (e.g., the Digital Markets Act) so require.
2. Recognition of Commercial Interests. The CNIL confirmed that commercial interests can qualify as legitimate, provided they are lawful and processing is necessary and proportionate. However, it emphasized that such interests may weigh less heavily in balancing tests than those with public or scientific value.
3. Necessity and Data Minimization. CNIL acknowledged concerns that large-scale data use could be challenged under the necessity principle. It clarified that GDPR does not preclude using large data volumes for AI training—so long as such use is optimized and minimization principles are respected.
4. Balancing Test and Risk Analysis. The final guidance distinguishes between risks tied to AI development and those arising at deployment. While deployment risks (e.g. discrimination) must be anticipated, the full scope of such risks need not be addressed at the development stage. CNIL reiterated that legitimate interest assessments must be case-specific and can reference but not replace DPIAs or AI Act risk assessments.
5. Reasonable Expectations and Web Scraping. CNIL addressed the controversy around public web data scraping, particularly concerns that individuals cannot reasonably expect their online content to be used for AI training. The authority:
Maintained that scraping is not inherently unlawful, but its legitimacy depends on case-by-case analysis.
Clarified that public availability alone does not mean data can be scraped, especially for sensitive data.
Urged alignment with site owners’ restrictions (e.g. robots.txt) and recommended technical safeguards to exclude highly sensitive or intrusive sources.
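To illustrate the robots.txt point above, here is a minimal sketch using Python’s standard urllib.robotparser module to check whether a given page may be fetched before scraping it; the user-agent string and URLs are placeholders.

```python
# Minimal sketch: checking a site's robots.txt before scraping,
# using only the standard library. The user-agent string and URLs
# are illustrative placeholders.
from urllib import robotparser

USER_AGENT = "example-ai-crawler"  # assumed name for illustration

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the robots.txt file

url = "https://example.com/profile/123"
if rp.can_fetch(USER_AGENT, url):
    print("allowed to fetch", url)
else:
    print("robots.txt disallows", url, "- skip this source")
```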
6. Additional Safeguards
CNIL endorsed technical and organizational safeguards such as:
Use of synthetic or pseudonymized data where appropriate (a minimal pseudonymization sketch follows this list).
Pre-filtering data sources and excluding clearly non-compliant websites.
Strengthening transparency measures and facilitating the exercise of data subject rights (e.g. access, erasure).
Introducing license terms to limit AI model use for re-identification.
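As one concrete reading of the pseudonymization safeguard above, here is a minimal sketch that replaces direct identifiers with keyed hashes before records enter a training corpus; the choice of HMAC-SHA-256 and the field names are assumptions for illustration, not CNIL prescriptions.

```python
# Minimal sketch of pseudonymizing direct identifiers with a keyed
# hash (HMAC-SHA-256) before records enter a training corpus. The
# secret key must be stored separately (e.g. in a key management
# system) so any re-identification stays controlled. Field names
# are illustrative.
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-key-management-system"  # placeholder

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "text": "user content"}
record["email"] = pseudonymize(record["email"])
print(record["email"][:16], "...")  # stable pseudonym, no raw email
```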
7. Postponed Open-Source Guidance. In light of diverging views, CNIL has decided to publish a separate case study on the open-source release of AI models, to provide more precise guidance aligned with GDPR requirements.
8. Registry Proposal Suspended. A proposed registry of entities engaging in AI-related data scraping has been put on hold due to stakeholder resistance, concerns over fragmentation, and limited expected participation.
Recommendations for Businesses
Carefully assess whether legitimate interest is a defensible legal basis, applying the full three-part test (purpose legitimacy, necessity, balancing of interests).
When scraping data, adhere to technical prohibitions (e.g. robots.txt), avoid sensitive or high-risk sources, and document risk mitigation measures.
Prepare to justify processing with a tailored balancing test and, where relevant, a DPIA.
🇩🇪 Germany: DSK Issues New Guidance on Privacy-Compliant AI Development
On 14 June 2025, Germany’s Data Protection Conference (Datenschutzkonferenz, DSK) published a new orientation guide titled Technische und organisatorische Maßnahmen bei der Entwicklung und beim Betrieb von KI-Systemen (Technical and Organizational Measures in the Development and Operation of AI Systems). The document outlines how to ensure compliance with the GDPR throughout the lifecycle of AI systems, from design through to deployment and continuous operation.
The guidance is structured around four lifecycle phases—design, development, deployment, and operation—and applies the Standard Data Protection Model (SDM) to align GDPR requirements with technical and organizational controls.
Legal Basis and Risk Management
Developers are generally considered controllers under the GDPR during the design and development phases.
Controllers must apply the principles of data protection by design and by default.
High-risk processing must be accompanied by a DPIA under Article 35 GDPR.
Responsibilities shift to end-user organizations during deployment and operation.
Technical and Organizational Measures by Phase
Design Phase: Focus on transparency, minimization, and early planning of safeguards.
Document the legal basis, purpose, and data sources (especially if data are scraped or publicly sourced).
Apply the SDM’s protection goals, such as transparency, data minimization, and unlinkability.
Consider using synthetic or anonymized data; avoid proxies that indirectly reveal sensitive traits.
Development Phase: Concerns training and validation of models.
Developers must prevent excessive or unnecessary processing by modularizing systems and limiting data exposure to only necessary components.
The integrity and representativeness of training data must be validated.
Ensure intervenability: for example, allowing retraining or removal of data in response to data subject requests.
Deployment Phase: Involves software distribution and configuration.
Apply privacy-friendly defaults and clearly document any model parameters, configuration settings, and how user data is processed.
Take care when distributing models that embed training data; non-parametric models, which store training records directly, warrant particular caution.
Operational Phase: Focus on updates, retraining, monitoring, and incident response.
Ensure traceability and explainability of decisions, especially if legal effects are involved.
Regularly re-evaluate the system's output quality, and retrain if discriminatory effects emerge.
Implement logging, access controls, and mechanisms to allow human oversight and correction.
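To make the last bullet concrete, here is a minimal sketch of audit logging with a human-review escalation path for an AI system’s decisions; the confidence threshold and names are illustrative assumptions, not taken from the DSK guide.

```python
# Minimal sketch of audit logging plus a human-review escalation
# path for automated decisions. The 0.8 confidence threshold and
# all names are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai.audit")

REVIEW_THRESHOLD = 0.8  # assumed confidence cut-off for escalation

def decide(case_id: str, outcome: str, confidence: float) -> str:
    # Log every automated decision for traceability.
    audit.info("case=%s outcome=%s confidence=%.2f", case_id, outcome, confidence)
    if confidence < REVIEW_THRESHOLD:
        # Low-confidence outputs are routed to a human reviewer.
        audit.info("case=%s escalated to human review", case_id)
        return "pending_human_review"
    return outcome

print(decide("A-101", "approve", 0.93))  # approve
print(decide("A-102", "reject", 0.55))   # pending_human_review
```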
Notable Recommendations
Machine Unlearning: Developers are encouraged to explore technical capabilities to remove data from trained models where necessary (e.g., in response to a right to erasure).
Distributed Learning: Techniques such as federated learning are recommended to reduce the need to centralize personal data (see the sketch after this list).
Backdoor Protection: If using pre-trained models, verify their integrity to guard against poisoned training data.
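For a sense of what the distributed-learning recommendation looks like in practice, here is a toy FedAvg-style sketch in which only model weights, never raw records, leave each client; the setup is purely illustrative and not an example from the DSK guide.

```python
# Toy FedAvg-style sketch: each client runs local gradient steps on
# its own private data, and the server averages the resulting weight
# vectors. Raw records never leave the client. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=10):
    # A few local gradient-descent steps on this client's data only.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three clients, each with a private dataset that is never shared.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(3)
for _ in range(20):  # federated rounds
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # server sees weights only

print(np.round(w_global, 2))  # approaches [ 1. -2.  0.5]
```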
🇪🇺 EU Parliament Proposes Directive on Algorithmic Management in the Workplace
On 12 June 2025, the European Parliament’s Committee on Employment and Social Affairs adopted a draft report recommending a new Directive to regulate the use of algorithmic management (AM) and AI systems in the workplace. The proposed initiative aims to close regulatory gaps not addressed by existing instruments such as the AI Act and the GDPR, and to ensure fair, transparent, and human-centered deployment of digital management tools.
The proposed Directive would apply to all workers and employers in the EU, as well as to solo self-employed persons and those who procure their services. It establishes minimum requirements concerning transparency, consultation, oversight, and occupational safety where AM tools are used.
Notable elements include:
Definition of Algorithmic Management: Covers systems that monitor, supervise, evaluate, or support decisions about work conditions, task allocation, remuneration, scheduling, and more, whether AI-based or not.
Right to Information: Employers and service procurers would be required to inform workers and solo self-employed persons—in writing and in accessible language—about the use of AM systems, including purposes, data collected, and the nature of decision-making processes.
Worker Consultation: New or significantly updated AM systems affecting work organization or remuneration must be subject to consultation with worker representatives, as per Directive 2002/14/EC.
Prohibited Practices: The use of AM systems to monitor off-duty behaviour or emotional states, or to predict the exercise of fundamental rights (e.g., union activity), would be banned.
Human Oversight: Critical decisions (e.g., hiring, firing, pay changes) cannot be made solely by algorithm. Workers would have the right to a human review and explanation.
Occupational Health and Safety: Employers must assess risks posed by AM systems and implement safeguards to prevent work-related stress, overwork, and psychosocial harm.
National Oversight: Labor inspectorates would be tasked with monitoring compliance, including bias detection, impact on worker health, and respect for working time regulations.
The report argues that existing laws such as the GDPR and the AI Act do not sufficiently cover the employment context or non-AI-based digital management tools. For example:
Article 88 GDPR on workplace data has seen limited implementation across Member States.
The AI Act focuses on market placement and product compliance, not employer–worker dynamics.
Solo self-employed workers—representing 10% of the EU workforce—lack protections despite being subject to similar automated decisions.
The Parliament calls on the Commission to submit a legislative proposal based on Articles 153(2)(b) and 16(2) TFEU. While the proposal is at an early stage, if adopted, it would establish harmonized EU-level standards for algorithmic management, providing legal certainty for businesses and safeguards for workers.
***
Direct your questions to groundcontrol@kepler.consulting.
Until the next transmission, stay secure and steady on course. Ground Control, out.