Privacy & Cybersecurity #24
Advanced Cryptography Guidance | Utah’s Generative AI Law | COPPA Amendments | TAKE IT DOWN Act | Connecticut Privacy Enforcement | Florida Investigates Roblox | Shopify Faces U.S. Privacy Lawsuit
🇬🇧 Advanced Cryptography: UK’s NCSC Releases Guidance on When to Use It — and When Not To
Don’t build your own cryptography
On April 28, 2025, the UK’s National Cyber Security Centre (NCSC) released a white paper titled Advanced Cryptography: Deciding When to Use It.
The NCSC defines Advanced Cryptography as a suite of cryptographic techniques that allow data to be processed while still encrypted, or otherwise handled in ways that go beyond traditional encryption and digital signatures. Examples include:
Homomorphic Encryption: Perform computations on encrypted data without decrypting it.
Multiparty Computation (MPC): Multiple parties compute an output together, without sharing their private inputs.
Zero-Knowledge Proofs (ZKPs): Prove knowledge or validity without revealing the underlying data.
Private Set Intersection (PSI): Find shared values in datasets without disclosing full datasets.
Private Information Retrieval (PIR): Query a database without revealing the query itself.
Attribute-Based Encryption (ABE): Restrict access based on specified user attributes.
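For intuition on the first technique, here is a toy sketch (not from the NCSC paper) of the Paillier cryptosystem, a classic additively homomorphic scheme: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so a server can add encrypted values without ever seeing them. The tiny primes and hand-rolled code are for illustration only; as the NCSC stresses below, never implement your own cryptography in practice.

```python
# Toy Paillier encryption demonstrating additive homomorphism.
# ILLUSTRATION ONLY: tiny primes, no padding, no side-channel care.
# Real systems need vetted libraries and >=2048-bit keys.
import random
from math import gcd

p, q = 293, 433                                 # tiny demo primes
n = p * q
n2 = n * n
g = n + 1                                       # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # L(g^lam mod n^2)^-1 mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c = (encrypt(20) * encrypt(22)) % n2
assert decrypt(c) == 42
```

A real deployment would use an audited implementation of such a scheme, not code like this; the point is only to show what "computing on encrypted data" means concretely.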
The NCSC emphasizes that advanced cryptographic tools:
Are computationally expensive;
Are difficult to implement correctly;
Often lack standardized, certified implementations;
Can introduce new security risks, such as covert communication channels or undetectable data exfiltration.
The paper presents a structured approach for determining whether to use Advanced Cryptography:
Define the Problem Clearly: What needs to be computed, where, and by whom? What data is sensitive, and why?
Map the Threat Model: What are the risks, who are the adversaries, and what can they access? Are there dishonest or colluding participants?
Consider Traditional Alternatives First: Can existing cryptography (TLS, encrypted databases, PKI) solve the problem? Is a trusted intermediary an acceptable solution?
If Necessary, Identify the Right Technique: Match your need (e.g., concealed queries, joint computation) to the right method. Verify whether any real-world tools exist and assess their performance, cost, and cryptographic assurances.
Account for Legal and Privacy Requirements: Consider UK GDPR implications, particularly the principles of data minimization and data protection by design. Note that Advanced Cryptography is not inherently GDPR-compliant; a data controller must still ensure lawful use and appropriate safeguards.
Key Risks to Understand
Immaturity of Tools: Many techniques are still experimental or lack widespread scrutiny.
Computational Overhead: Performance may not scale to large datasets.
Complex Deployment: May require new infrastructure or deep expertise.
Security Gaps: Some techniques (e.g., homomorphic encryption) do not provide integrity by default.
Operational Blind Spots: Encrypted traffic can bypass intrusion detection systems.
Recommendations for Businesses
Use traditional cryptography when possible
Run pilots before deployment. Test performance and functionality at small scale before large-scale rollout.
Engage legal and risk teams early.
Don’t build your own cryptography.
The NCSC is explicit:
“In almost all cases, it is bad practice for users to design and/or implement their own cryptography; this applies to Advanced Cryptography even more than traditional cryptography because of the complexity of the algorithms. It also applies to writing your own application based on a cryptographic library that implements the Advanced Cryptography primitive operations, because subtle flaws in how they are used can lead to serious security weaknesses.”
🇺🇸 Utah’s New Laws on Generative AI
In March of this year, Utah Governor Cox signed several bills concerning the use of generative artificial intelligence. These measures—Senate Bills 332 and 226, and House Bill 452—establish new duties for businesses.
SB 332 simply delays the expiration of Utah’s 2024 Artificial Intelligence Policy Act (AIPA) from May 2025 to July 2027. In doing so, it ensures that the new rules adopted in SB 226 and HB 452 will remain in effect, and allows further refinement through time and experience.
SB 226 amends the AIPA by setting forth duties of disclosure when generative AI is used in sensitive contexts. The law applies where:
A supplier engages in a consumer transaction and uses AI to interact with the consumer; or
A person provides services in a regulated occupation (such as law, medicine, or finance), and uses AI in a way that might affect personal or legal decisions.
In such cases, the supplier must inform the individual that AI is being used if the individual clearly and directly asks. In regulated professions, disclosure is required if the AI use qualifies as a “high-risk interaction.”
Failure to disclose may result in fines of up to $2,500 per violation, imposed by the Utah Division of Consumer Protection.
HB 452 introduces a new set of rules specific to AI systems that simulate mental health care. If a chatbot gives the appearance of providing therapy or mental health advice, it must:
Clearly inform the user that it is not a human;
Avoid using user inputs to target advertisements;
Refrain from selling or sharing health data, with only narrow exceptions.
Recommendations for Businesses
Be prepared to disclose the use of AI upon request. In high-risk domains, do not wait to be asked. Disclose voluntarily and clearly.
Use plain, unambiguous language.
Where AI resembles a licensed human professional (particularly in therapy or medical fields), make it unmistakably clear that the interaction is with a machine.
Health-related input and personal information must not be shared or sold. Secure consent and maintain contractual control when integration with third parties is necessary.
AI systems in therapeutic settings must not use what a user says to shape the ads they see.
🇺🇸 FTC Finalizes Stricter COPPA Rule for 2025
On April 22, 2025, the Federal Trade Commission (FTC) published its final amendments to the Children’s Online Privacy Protection Rule (COPPA). The revised Rule, effective June 23, 2025, with a compliance deadline of April 22, 2026 for most provisions, redefines several key obligations and introduces a stricter approach to children’s data privacy.
Key Changes to the Rule
Expanded Definitions. The Rule introduces a new category: “mixed audience website or online service.” This term formalizes the treatment of platforms that are not exclusively child-directed but nevertheless attract a significant number of children. Operators in this category must now treat child users as such, even if the site is not solely aimed at them.
The Rule also broadens the definition of “personal information” to include:
Mobile phone numbers under “online contact information”;
Government-issued identifiers (such as social security numbers);
Biometric identifiers, including facial and voice data used for recognition purposes.
Written Security Programs Required. Operators must now maintain a documented, risk-based security program that protects children’s personal data. The program may be integrated into a general information security policy, provided it meets all new COPPA standards. The safeguards must account for the operator’s size, complexity, scope of activities, and the sensitivity of data collected.
Strict Data Retention and Deletion Rules. The revised § 312.10 codifies a long-standing enforcement stance: indefinite retention of children’s personal data is prohibited. Operators must now adopt a written data retention policy, publicly disclosed, which details:
The purpose of data collection;
The business need for retention;
A defined deletion timeframe;
Reasonable safeguards against unauthorized access during deletion.
Parental Consent Rules Strengthened. Several methods of obtaining verifiable parental consent have been clarified or revised:
The “email plus” method is retained but now treated as secondary;
What it means: The “email plus” method is one of the FTC’s older mechanisms for obtaining verifiable parental consent. It allows an operator to send an email to the parent, then use a secondary method (like a delayed confirmation email or another step) to verify that the parent indeed gave consent. This method has always been permitted only when the data collected is used for internal purposes only (e.g., account creation, personalization)—not for sharing with third parties.
The 2025 amendment keeps the “email plus” method, but:
It confirms its limitations—only for internal uses;
It places more emphasis on other, stronger consent methods;
It subtly discourages reliance on this method in high-risk or unclear scenarios.
Use of credit or debit cards no longer requires a monetary transaction;
Previously, one way to verify a parent’s identity and obtain consent was through a payment method—a credit or debit card. But this method required that the parent actually complete a monetary transaction (e.g., pay $1) to prove they were an adult. Now, the transaction is no longer required. The use of the card itself is enough, provided that the card is validated in a secure manner (e.g., using standard anti-fraud tools), and it's reasonable to believe the cardholder is the parent.
A new audio-file exception allows collection of a child’s voice in limited contexts without consent, provided the audio is promptly deleted and used solely to respond to the child’s request.
Expanded Requirements for Safe Harbor Programs. COPPA-approved Safe Harbor programs must now:
Publicly list member operators and their certified platforms;
Submit triennial capability reports;
Include more detailed complaint disclosures and operator oversight mechanisms.
What Are COPPA Safe Harbor Programs?
Under the Children’s Online Privacy Protection Act (COPPA), the Federal Trade Commission (FTC) allows industry groups or other qualified organizations to develop their own self-regulatory programs—these are known as Safe Harbor Programs. A business that follows the rules of an FTC-approved Safe Harbor Program is deemed to be in compliance with COPPA, and in most cases, the Safe Harbor Program—not the FTC—is responsible for monitoring and enforcement.
The idea behind Safe Harbor Programs is this: some industries—such as children’s gaming, educational tech, or social media—may benefit from specialized rules and oversight that account for how they actually operate. Rather than apply a one-size-fits-all federal rule, COPPA allows these programs to tailor privacy rules to their members’ specific technologies, offer hands-on compliance support, and reduce direct enforcement pressure from the FTC.
To gain approval, a Safe Harbor Program must prove that its rules meet or exceed COPPA’s protections, that it has effective enforcement mechanisms, and that it provides transparency, complaint handling, and regular oversight.
The FTC’s new COPPA Rule strengthens the obligations of Safe Harbor Programs. Starting in 2025, they must publicly list all participating members and services, provide triennial capability reports to the FTC, maintain formal complaint procedures, and show evidence of actual oversight, not just paperwork.
Recommendations for Businesses
Re-evaluate whether your service may now be considered a “mixed audience” platform.
Set specific, justified retention periods for child data. Publish this policy in your online notice.
Ensure your security controls explicitly address child data.
Document how you tailor your safeguards to the nature and sensitivity of such data.
Implement one or more FTC-recognized consent methods.
Train Your Team.
🇺🇸 TAKE IT DOWN Act: A Federal Response to Deepfake Exploitation and Nonconsensual Intimate Content
A bipartisan group of U.S. Senators has introduced the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, known as the TAKE IT DOWN Act. This bill seeks to establish federal criminal and civil penalties for the publication of nonconsensual intimate images—including both authentic photos and digitally fabricated deepfakes.
Amending Section 223 of the Communications Act of 1934, the Act criminalizes two categories of unlawful publication:
A. Authentic Intimate Images
Applies when an image is shared without consent, and:
It was taken under circumstances where a person had a reasonable expectation of privacy;
It was not already public or commercial in nature;
Its publication was intended to cause harm or did, in fact, cause psychological, financial, or reputational harm.
B. Digital Forgeries (Deepfakes)
Applies to AI-generated or manipulated depictions indistinguishable from real images, even if the depicted event never occurred.
Criminal liability attaches when:
The subject did not consent;
The image was not voluntarily public;
The publication was harmful or malicious in nature.
Penalties:
Up to 2 years imprisonment for adult-related offenses;
Up to 3 years for offenses involving minors;
Additional penalties for threatening to release such content, plus mandatory restitution and forfeiture of profits or devices used in the offense.
The bill would also require covered online platforms (those hosting user-generated content) to:
Establish a takedown process allowing individuals to request the removal of intimate content published without consent;
Respond within 48 hours to valid requests;
Make “reasonable efforts” to remove identical copies;
Provide clear public notice of the removal process in accessible, plain language.
🇺🇸 NYDFS Cybersecurity Controls Now in Force — What Covered Entities Must Do Next
As of May 1, 2025, all Covered Entities regulated by the New York State Department of Financial Services (NYDFS) are now expected to be fully compliant with the latest round of cybersecurity requirements under the amended 23 NYCRR Part 500. These enhanced rules apply to financial services companies operating under NYDFS jurisdiction, including:
State-chartered banks and trust companies,
Licensed lenders and mortgage servicers,
Insurance companies authorized in New York,
Virtual currency businesses, and others.
Covered Entities must now have the following technical controls fully implemented:
Automated and manual vulnerability scanning procedures;
Strict access privilege limitations, including regular review and timely revocation of unnecessary access;
Comprehensive event logging to support real-time monitoring and incident response;
Secure password policies, especially where passwords are still used in conjunction with authentication systems.
Under NYDFS rules, a “Covered Entity” includes any financial institution or licensee supervised by NYDFS. Entities meeting both of the following criteria are additionally classified as Class A Companies, subject to stricter cybersecurity obligations:
Annual gross revenue of $1 billion or more (including affiliates), and
Workforce of 2,000 employees or more (including affiliates).
The next compliance milestone is November 1, 2025, when the following requirements take effect:
Multi-Factor Authentication (MFA) must be enabled for all users accessing information systems.
Cybersecurity awareness training must be provided annually to all personnel.
Asset inventory requirements will apply, requiring clear identification and tracking of all information system components.
Class A Companies must implement privileged access management and enhanced monitoring, and conduct independent cybersecurity audits.
🇪🇸 Spanish DPA Fines Fitness Chain for Unlawful Video Recordings
In a decision dated March 2025, the Agencia Española de Protección de Datos (AEPD) fined School Fitness Holiday & Franchising, S.L., the corporate owner of the Holiday Gym brand, for serious violations of the GDPR. The fines stemmed from its practices of recording and publishing client videos without valid consent, alongside additional failings related to data retention and the improper handling of its relationship with a subcontracted gym branch.
A gym member of Holiday Fit Tres Cantos submitted a formal complaint in 2023, alleging that staff were recording fitness classes without proper notice or consent. Despite repeated objections, the complainant claimed they were ignored and their image continued to be used, potentially for promotional purposes.
The company invoked contract terms to justify its actions—some of which referenced sweeping authorizations for global use of images “without limitation.” But the AEPD found these provisions both ambiguous and noncompliant with GDPR standards of consent.
The AEPD found three principal violations:
Lack of Valid Consent (Article 7 GDPR). The company asserted that consent had been obtained through contract clauses and verbal prompts before classes. However, upon review, no evidence supported that the complainant had been given a clear, specific, informed, and unambiguous choice. Key failings included:
No standalone opt-in for image use;
Clauses buried within broader contract language;
Conditional acceptance of services based on image consent (a clear GDPR violation);
Absence of proof for alleged verbal notices.
Unlawful Data Retention (Article 5.1.e GDPR). The company’s internal policy indicated that promotional images and videos would be kept for an “indefinite period.” GDPR requires that personal data be retained only as long as necessary for its purpose. Indefinite retention contradicts this foundational principle.
Improper Role Assignment (Article 28 GDPR). Though the gym operator (Holiday Fit Tres Cantos) was said to be a joint controller, the AEPD concluded that it was in practice acting as a processor on behalf of School Fitness, which alone decided why and how recordings were made and used. However, no data processing agreement was in place reflecting this true relationship, as required under GDPR.
The total sanction of €36,000 was reduced to €21,600 after School Fitness admitted liability and paid early, triggering a 40% reduction under Spain’s administrative procedures. In addition, the company has been ordered to implement specific measures within six months, including:
Establishing a valid consent mechanism;
Deleting any unlawfully obtained images;
Limiting retention to a lawful period;
Executing a proper data processing agreement with Holiday Fit Tres Cantos.
Recommendations for Businesses
Ensure that image, audio, or biometric processing consents are separate, explicit, and optional.
Never condition service delivery on unnecessary data collection.
Every processing activity must have a defined retention period. “Forever” is not an option.
Periodic audits of stored image and video files are essential.
Train Staff on Verbal and Visual Consent Protocols.
Avoid publishing identifiable images of clients without robust proof of consent.
Use stock imagery or masked visuals unless explicit, written consent is on file.
🇷🇴 Travel Agency Fined for GDPR Breaches After Facebook Leak
In April 2025, the Romanian National Supervisory Authority for Personal Data Processing (ANSPDCP) fined a travel agency, SC Travel Planner SRL, for serious violations of the General Data Protection Regulation (GDPR). The breaches stemmed from the public disclosure of customer data on Facebook during a promotional contest.
The investigation revealed that Travel Planner had published a list of customers on its public Facebook page in connection with a raffle. The list included:
Full names of tourists,
Reservation codes,
Hotel or location of stay, and
Duration of trips.
No proper safeguards were in place. No attempt was made to mask, limit, or restrict this information. The data was available to any member of the public who accessed the page.
In the course of the investigation, further failings were uncovered:
The breach was never reported to the data protection authority, as required by GDPR Article 33.
The company failed to provide complete responses to data access requests from individuals.
Penalties Imposed:
€5,000 for failing to secure personal data (Article 32 GDPR).
€1,000 for failure to notify the breach (Article 33 GDPR).
Formal warning for non-compliant responses to data subject access requests (Articles 15 and 12).
Recommendations for Businesses
Avoid publishing names, contact details, or identifiers unless the individual has given clear, written, and informed consent, or a strong legal basis exists and appropriate safeguards (e.g., anonymization or initials) are in place.
Report breaches promptly. Under GDPR Article 33, most personal data breaches must be reported to the supervisory authority within 72 hours.
Ensure you have internal procedures to detect and evaluate potential breaches, that your team knows who decides when and how to report, and that you maintain a record of breaches.
Respond Properly to Access Requests.
Implement Technical and Organizational Safeguards: role-based access to personal data, policies against posting identifiable client information, staff training, and secure deletion and retention protocols.
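The 72-hour notification window mentioned above is simple date arithmetic, but teams regularly miss it by starting the clock at the end of an internal investigation rather than at awareness. A minimal sketch (illustrative only; the function name is our own):

```python
# Sketch: the GDPR Article 33 clock runs from when the controller
# becomes AWARE of the breach, not from when the investigation ends.
from datetime import datetime, timedelta

def breach_notification_deadline(became_aware: datetime) -> datetime:
    """Return the latest time the supervisory authority should be notified."""
    return became_aware + timedelta(hours=72)

aware = datetime(2025, 4, 10, 9, 30)
assert breach_notification_deadline(aware) == datetime(2025, 4, 13, 9, 30)
```

Logging the "became aware" timestamp in your breach register makes the deadline unambiguous if the authority later questions the timeline.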
🇺🇸 Shopify Faces Data Privacy Lawsuit in California After Court Revives Case
The U.S. Court of Appeals for the Ninth Circuit has ruled that Shopify Inc.—the Canadian e-commerce infrastructure provider—can be sued in California over alleged violations of privacy laws. The ruling revives a proposed class action, allowing the case to move forward in U.S. court. The Court did not decide whether Shopify broke the law—only that the company must now answer the claims in a California court.
The lawsuit, filed by Brandon Briskin, a California resident, claims that Shopify embedded tracking software in merchant checkout pages without disclosing this to users, including during a purchase Briskin made on his iPhone. Briskin alleges that Shopify collected personal and behavioral data from millions of users, including Californians, and profited from this by building detailed consumer profiles for use by other businesses. A lower court had dismissed the suit, accepting Shopify’s argument that it could not be sued in California. But the Ninth Circuit reversed that ruling.
The Ninth Circuit held that Shopify deliberately targeted the California market, both by embedding its software in stores used by California merchants and by collecting and monetizing data from California consumers. This satisfied the standard for personal jurisdiction.
The case will now proceed in the U.S. District Court for the Northern District of California, where Shopify will have the opportunity to respond to the substance of the claims.
🇺🇸 Florida Subpoenas Roblox in Child Data and Online Safety Probe
Florida Attorney General James Uthmeier has opened an investigation into Roblox Corporation, issuing a formal subpoena seeking documents related to the company’s handling of children’s privacy, content moderation, and commercial practices.
On April 15, 2025, the Florida Office of the Attorney General issued a Civil Investigative Demand to Roblox Corporation. The subpoena—signed by Attorney General Uthmeier—seeks detailed records concerning:
The collection, use, sharing, and sale of personal data from children;
Roblox’s age verification systems and content moderation practices;
Its parental controls, user reporting tools, and in-game monetization mechanics;
Third-party advertising, targeting of minors, and sale of digital goods;
Company policies, complaints, and communications related to harmful or sexually explicit content available on the platform.
The Attorney General's accompanying press release states that the goal is to determine whether Roblox has violated Florida’s Deceptive and Unfair Trade Practices Act (FDUTPA), especially in relation to children under 13.
🇺🇸 The Connecticut Data Privacy Act: 2025 Enforcement Trends
Attorney General William Tong’s Office has released its Updated Enforcement Report under the Connecticut Data Privacy Act (CTDPA), which took effect on July 1, 2023. In 2024 alone, the Office received over 1,900 data breach notifications.
Key Enforcement Priorities
1. Broken or Misleading Privacy Notices. The Office found that many companies still fail to:
Clearly list Connecticut consumers’ rights,
Provide a conspicuous “opt out” link,
Acknowledge CTDPA’s full scope (some mention California only),
Avoid misleading or self-contradictory statements (e.g., saying “we don’t sell data” while disclosing it elsewhere).
2. Dark Patterns and Deceptive Cookie Banners. The report targets:
“Accept all cookies” buttons with no equal “reject all” option;
Confusing paths to opt-out (multiple clicks, small fonts, hidden links);
Defaults set to opt-in without consent.
3. Facial Recognition and Biometrics. The OAG is investigating retailers using facial recognition for loss prevention. It reminds businesses:
Consent is required for biometric data,
Processing must be limited to the stated purpose,
A data protection assessment (DPA) is mandatory.
4. Minors’ Data and Addictive Designs. Connecticut’s 2023 law (SB3) extends privacy protections to all minors under 18, going beyond federal COPPA. Key rules:
No targeted ads or profiling without opt-in consent,
No addictive design features to keep minors online longer,
No precise geolocation without clear, revocable consent.
Other Enforcement Focuses
Consumer Health Data: Telehealth firms transmitting sensitive info (e.g., to Meta or Google) are under investigation.
Connected Vehicles: Data collection by car manufacturers is being audited.
Genetic and Ancestry Data: Investigations into poor security and unauthorized use continue.
Cremation Ads After Chemotherapy: Use of data in sensitive contexts—even when not classified as “sensitive”—can cross ethical and legal lines.
Recommendations for Businesses
Review and Revise Privacy Notices
List CTDPA rights explicitly;
Include opt-out links for targeted ads and data sales;
Avoid contradictory language;
Reformat your notice for clarity and readability.
Eliminate Dark Patterns
“Accept all” must be paired with a visible “Reject all”;
No default opt-ins;
No misleading language like “By browsing, you agree…” without an actual choice.
Secure Consent for Biometric and Health Data
Get clear, revocable, opt-in consent;
Conduct Data Protection Assessments;
Store biometric data securely and only as long as necessary.
Prepare for Universal Opt-Out Signals
As of Jan 1, 2025, Global Privacy Control (GPC) must be honored;
Technical systems must detect and respond to opt-out signals;
If you process targeted ads or sell personal data, compliance is mandatory.
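On the technical side, GPC is delivered as an HTTP request header, so honoring it server-side is straightforward. A minimal sketch, assuming a framework that exposes request headers as a string mapping (the function name is our own):

```python
# Sketch: detecting the Global Privacy Control opt-out signal server-side.
# The GPC specification defines the "Sec-GPC: 1" request header; browsers
# also expose it client-side as navigator.globalPrivacyControl.
def gpc_opt_out(headers: dict) -> bool:
    """Treat Sec-GPC: 1 as a valid opt-out of targeted ads and data sales."""
    return headers.get("Sec-GPC", "").strip() == "1"

assert gpc_opt_out({"Sec-GPC": "1"})
assert not gpc_opt_out({})
```

In practice the check belongs in middleware, so every downstream ad-targeting or data-sharing decision sees the opt-out flag before it runs.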
Don’t Delay Breach Notifications
Connecticut law requires notice within 60 days;
The Office expects notice soon after detection, not only after full investigation;
Expect penalties if delay is unjustified, especially if warned in the past.
***
Direct your questions to groundcontrol@kepler.consulting.
Until the next transmission, stay secure and steady on course. Ground Control, out.