AI Browsers Are Changing the Game — But at What Cost?

Artificial intelligence is quietly reshaping how we browse the web. AI-enhanced browsers like Comet promise to transform research, knowledge work, and decision-making. They summarize complex pages, anticipate questions, and synthesize insights instantly. I joined a local AI group today for a discussion of current AI trends and saw a demo of Comet. For startups and lean teams, that kind of capability is revolutionary, delivering speed, cost savings, and a strategic edge.

But as with any disruptive technology, the question isn’t just what it can do; it’s what it costs us in privacy, compliance, and control.

At CipherNorth, we often help leadership teams strike this balance. And when we dug deeper into Perplexity’s Comet browser, especially on behalf of financial institutions and healthcare providers, the results were clear: the privacy tradeoffs can be significant.

The Impact

AI browsers can dramatically reduce friction in workflows by summarizing policies, extracting data, even automating parts of compliance documentation. For many startups that don’t handle regulated data, these tools are a legitimate productivity leap forward. The value is real.

But once your organization holds regulated or sensitive data, such as credit card numbers, confidential client information, or patient records, the equation changes.

The Privacy Reality: What Comet’s Own Policies Reveal

CipherNorth reviewed Comet’s Privacy Notice, Terms of Service, and related documents through a risk-management lens. Below are the key findings every compliance-minded organization should consider. And yes, we used ChatGPT to assist.

1. Your Browsing Data Becomes Training Data

(Comet Privacy Notice, Section 2)
Comet collects URLs, page content, and interaction data “to improve the product and recommend relevant content.” That means even legitimate business use, such as viewing internal dashboards, client portals, EHR systems, or your own banking website, could unintentionally expose confidential information to an external AI service.

Risk: Confidential business data, personal data, or regulated information may be transmitted outside the organization’s control.

2. Prompts and Inputs Are Processed Externally

(Comet Privacy Notice, Section 2; Perplexity Privacy Policy, “Service Interaction Information”)
Each query or AI prompt may be processed by third-party AI models “to predict and complete search queries.” In a regulated environment, this could include sensitive data like account numbers, diagnoses, or client names entered into an AI prompt. It also suggests that Comet may send your prompts to third parties with whom you have no direct relationship.

Risk: Sensitive inputs, including PHI or PII, could be captured and reused for model improvement.
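
One practical control here is a prompt-level data loss prevention (DLP) screen that inspects text before it ever reaches an external AI service. The sketch below is a minimal illustration of that idea, not a production rule set; the patterns and the flag_sensitive helper are our own illustration, not anything Comet or Perplexity provides.

```python
import re

# Illustrative patterns only; production DLP rules are far more nuanced
# (e.g., Luhn validation for card numbers, context-aware matching).
PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Payment card number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize the dispute on card 4111 1111 1111 1111 for SSN 123-45-6789."
    hits = flag_sensitive(prompt)
    if hits:
        print("Hold for review before sending externally:", hits)
```

Real DLP engines add validation and contextual scoring, but even a coarse screen like this catches the obvious cases before they leave the boundary.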

3. Technical Telemetry Still Flows

(Comet Privacy Notice, Section 2)
Even in Incognito Mode, Comet gathers device identifiers, IP addresses, and crash data. While this helps improve security, it also means the organization cannot fully prevent external data transmission, a potential issue under regulatory frameworks and general security best practices.

Risk: Metadata may reveal user identity, location, or internal network structure.
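
Enterprises can’t switch this telemetry off, but they can at least observe it at the network edge. Below is a minimal sketch of a proxy-log audit, assuming a CSV export with timestamp, src_ip, and dest_host columns; the watchlist domains are placeholders, not Perplexity’s actual endpoints, which you would identify by observing the browser in a sandboxed packet capture.

```python
import csv

# Placeholder watchlist; replace with the actual telemetry endpoints you
# observe from the browser in a sandboxed packet capture.
WATCHLIST = {
    "telemetry.example-ai-browser.com",
    "crash-reports.example-ai-browser.com",
}

def audit_proxy_log(path: str) -> list[dict]:
    """Flag proxy-log rows whose destination host is on the watchlist.

    Assumes a CSV export with 'timestamp', 'src_ip', and 'dest_host'
    columns; adjust field names to your proxy's actual log schema.
    """
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("dest_host") in WATCHLIST]

if __name__ == "__main__":
    for hit in audit_proxy_log("proxy_export.csv"):
        print(f"{hit['timestamp']}  {hit['src_ip']} -> {hit['dest_host']}")
```

Detection after the fact is weaker than prevention, but it at least gives risk teams evidence of what is leaving the network.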

4. Synced Accounts May Expose Credentials

(Comet Privacy Notice, Section 2; Perplexity Privacy Policy, “Linked Accounts”)
If users sign in or connect email and calendar services like Gmail, Comet may access contacts, appointments, or messages to deliver “contextual assistance.” For hospitals and banks, this is a critical red flag: connecting internal communication systems could unintentionally expose sensitive or regulated content.

Risk: PHI, financial transactions, or client data could flow to external AI models.

5. Third-Party Sub-Processors and AI Models

(Sub-Processor Notice & Third-Party Models Documentation; Privacy Policy, “Service Providers and Partners”)
Comet explicitly relies on sub-processors and external AI models for analytics and feature delivery. This decentralization limits any organization’s ability to audit data flow, enforce data residency, or contractually control who handles sensitive information, a non-starter under HIPAA and similar regulatory frameworks.

Risk: Uncontrolled data residency and inability to audit downstream data handling.

6. No Oversight for Third-Party Extensions

(Comet Terms of Service, Section 4)
The Terms of Service offer no oversight of third-party extensions installed in Comet. Any extension a user adds can read and transmit page content to its own servers, entirely outside the privacy commitments reviewed above, and a regulated enterprise has no contractual recourse against those developers.

Risk: Extension-borne data leakage and vendor oversight gaps.

7. Cross-Border Data Transfers Are Standard Practice

(Privacy Policy, “International Transfers”)
Perplexity acknowledges that data may be transferred or stored outside your jurisdiction and relies on Standard Contractual Clauses (SCCs) for compliance. While possibly acceptable under GDPR, this creates governance complexity for U.S. financial institutions or healthcare systems subject to data localization requirements or regulator review.

Risk: Data sovereignty violations or audit conflicts.

8. Regulatory Exposure

For hospitals, any exposure of PHI to an AI model without a Business Associate Agreement violates HIPAA and could be reportable. For banks, the uncontrolled transmission of customer data breaches GLBA and runs afoul of FFIEC frameworks, and could trigger additional privacy reporting obligations. In either case, the organization would face audit failure, potential fines, and reputational risk.

What the Data Processing Addendum (DPA) Reveals

Perplexity’s Data Processing Addendum (DPA) offers a closer look at how the company governs enterprise data use, and where those safeguards stop. At first glance, the DPA reads like a reassuring layer of privacy controls: commitments to process data only on customer instruction, to delete data upon request, and to avoid using customer inputs to train large language models. However, a deeper read shows that these protections apply only to customers who have a formal business agreement with Perplexity, not to general Comet users or free-tier accounts.

For startups experimenting with AI, this distinction may not matter. But for banks, hospitals, or regulated service providers, it’s a dividing line between controlled processing and uncontrolled data export.

Key takeaways from the DPA include:

  • Limited Scope: The DPA applies only to paying or enterprise customers, meaning most Comet users do not benefit from its terms.
    Risk: Regulated data entered in Comet’s free version lacks contractual privacy protections.

  • Processor Role & Instruction-Based Use: Perplexity defines itself as a processor acting on customer instructions and commits not to repurpose data for unrelated purposes.
    Positive: Data use is more restricted under enterprise contracts.
    Caveat: Outside these contracts, inputs and prompts may still be used for product improvement.

  • No Model Training Clause: The DPA prohibits using enterprise customer data for model training.
    Positive: This aligns with GDPR and data minimization principles.
    Risk: Only enterprise customers get this protection.

  • Third-Party Sub-Processor Oversight: Perplexity must notify enterprise customers of new sub-processors and allow limited objections.
    Positive: Offers visibility into vendor changes.
    Risk: The only remedy for objections is to terminate service, which is not an option for critical systems.

  • Deletion and Audit Rights: Customers can request deletion of their data or audit Perplexity’s compliance, but audits are limited to once annually and must be pre-arranged.
    Risk: Limited auditability may not meet regulatory expectations under HIPAA, GLBA, or PCI DSS.

  • Cross-Border Transfers: Perplexity relies on EU Standard Contractual Clauses for international transfers.
    Risk: U.S. regulators increasingly scrutinize reliance on SCCs without strong local enforcement or data residency guarantees.

In short: the DPA shows that Perplexity is taking steps toward enterprise-grade governance, but it also underscores how far AI browsers have to go before they’re ready for highly regulated environments. The distinction between “consumer Comet” and “enterprise Comet under a DPA” is not a legal nuance; it’s a compliance boundary. For regulated sectors, that boundary defines whether AI adoption is an innovation or an incident waiting to happen.

The Bottom Line

AI browsers like Comet are game-changing for unregulated businesses, delivering faster insights, smarter context, and lower costs. But for enterprises or businesses managing sensitive or regulated data, privacy, compliance, and accountability must come first. Until AI browsers provide enterprise-grade controls, contractual data protection, model transparency, and localized processing, their use should be limited to non-production or sandboxed environments. It’s imperative that enterprises control free and consumer-tier use of technologies like this in their environments.
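
That control starts with visibility. As a sketch of what an endpoint inventory check might look like when pushed through EDR or MDM tooling, the script below flags candidate installs; the paths and the find_unapproved_installs helper are illustrative assumptions rather than verified Comet install locations, so treat it as a pattern, not a signature.

```python
import platform
from pathlib import Path

# Assumed install locations, one set per OS. These are illustrative guesses,
# not verified Comet paths; confirm against a known install in your fleet.
CANDIDATE_PATHS = {
    "Darwin": [Path("/Applications/Comet.app")],
    "Windows": [Path.home() / "AppData" / "Local" / "Comet"],
    "Linux": [Path("/opt/comet"), Path.home() / ".local" / "share" / "comet"],
}

def find_unapproved_installs() -> list[Path]:
    """Return candidate install paths that actually exist on this machine."""
    return [p for p in CANDIDATE_PATHS.get(platform.system(), []) if p.exists()]

if __name__ == "__main__":
    hits = find_unapproved_installs()
    print("Unapproved AI browser installs:", [str(p) for p in hits] or "none found")
```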

Category | Key Concern | Regulatory Impact
Data Exposure | URLs, inputs, and browsing content may leave the enterprise boundary | GLBA, HIPAA, PCI DSS
Third-Party Processing | Lack of direct contracts with sub-processors | Vendor management, SOC 2, ISO 27001
Telemetry & Metadata | IPs and device IDs may reveal user or network data | FFIEC Cybersecurity Assessment, NIST PR.AC
Cross-Border Transfers | Reliance on SCCs may not meet U.S. data residency expectations | OCC, GDPR, state privacy laws
Lack of BAA / DPA Coverage | Free-tier users lack contractual privacy protections | HIPAA violation, GLBA breach risk
User Error / Shadow IT | Uncontrolled AI browser use in corporate environments | Governance and insider risk exposure

Key Takeaway

AI browsers are a glimpse of the future. But in regulated sectors, that future must be designed with privacy by default and compliance by design, or the cost of innovation could far outweigh the benefits. I would advise caution before using a tool like this in a regulated enterprise setting, and a complete contractual review of the areas called out above prior to adoption. If it meets data protection and legal expectations, it could be a game changer. It’s still imperative for organizations to have a framework for adopting GenAI in order to satisfy regulator and consumer scrutiny of their security posture.

About CipherNorth

CipherNorth is a boutique cybersecurity advisory firm based in Birmingham, Alabama, helping organizations turn security and privacy into strategic advantage. With deep experience across financial services, healthcare, and SaaS, CipherNorth guides boards, executives, and startups in building right-sized governance, risk, and compliance programs that enable growth without compromising trust.
