NewsGuard Sues FTC Over Ad Market Control | Analysis by Brian Moineau

A ratings service says the FTC is trying to strangle it — and the First Amendment is now part of the fight

The headline reads like a legal thriller: a company that assigns "trust scores" to news websites has sued the Federal Trade Commission, accusing the agency of weaponizing regulatory power to cut it out of the advertising ecosystem. It's NewsGuard versus the FTC, fronted by Chairman Andrew Ferguson — and the dispute raises three big questions: who gets to police the media marketplace, when does regulation become censorship, and how much power do ad buyers and agencies hold over what counts as “acceptable” news?

Why this matters

  • Advertisers funnel billions of dollars through a handful of ad agencies. If those agencies can't or won't buy inventory adjacent to particular outlets, those outlets lose revenue and reach.
  • Independent evaluators like NewsGuard say they help brands avoid reputational risk and help readers assess reliability. Critics say these ratings can be subjective or politically skewed.
  • When a regulator uses merger remedies or investigations that have the effect of freezing a ratings company out of the market, the stakes shift from commercial competition to free-speech and due-process questions.

Quick takeaways

  • NewsGuard filed suit on Feb. 6, 2026, alleging the FTC burdened it with sweeping document demands and inserted merger conditions that effectively bar major ad agencies from using its ratings. (washingtonpost.com)
  • The contested merger remedy arose in the Omnicom–Interpublic transaction; the FTC’s order reportedly prevents those ad holding companies from basing ad buys on “journalistic standards or ethics” set by third parties — language NewsGuard says was crafted to target it. (washingtonpost.com)
  • NewsGuard argues the FTC’s actions violate the First and Fourth Amendments and amount to government censorship of a private service. The FTC and some conservatives argue NewsGuard has a political slant and has inflicted commercial harm on certain outlets. (washingtonpost.com)

What NewsGuard does and why advertisers use it

NewsGuard, launched in 2018 by media veterans including Steven Brill and Gordon Crovitz, uses human journalists to score sites on nine transparency and credibility criteria and publishes a “nutrition label” explaining each score. Brands and agencies have used these ratings to reduce ad placement near sites they judge risky, and browser extensions surface those trust scores to consumers. NewsGuard emphasizes transparency in its methodology and publishes the criteria it applies. (newsguardtech.com)

Why advertisers care:

  • Brand safety concerns: running ads next to fraudulent, extreme, or disinformation-filled content can cause reputational damage.
  • Liability and client pressure: large advertisers increasingly demand oversight tools to demonstrate they’re avoiding harmful placements.
  • Centralized buying power: big holding companies and ad agencies set de facto industry norms for what’s acceptable.

The FTC’s actions that sparked the lawsuit

According to NewsGuard’s complaint and reporting by The Washington Post, two lines of FTC activity prompted the suit:

  • An extensive information demand: the FTC ordered broad disclosures of NewsGuard’s client lists, ratings deliberations, communications, and financials — an investigation NewsGuard says is so sweeping it chills its business and violates privacy and press protections. (washingtonpost.com)

  • A merger condition in Omnicom–Interpublic approval: the FTC’s order included language preventing the combined agency from directing ad buys based on “adherence to journalistic standards or ethics established or set by a third party.” NewsGuard argues that language functions as a ban on companies using its ratings, effectively blacklisting the service. Newsmax and other conservative outlets publicly urged the FTC to broaden the language, which NewsGuard says revealed intent. (washingtonpost.com)

NewsGuard’s legal team frames these moves as retaliation driven by political disagreement, pointing to prior public criticism of the company by now-FTC Chair Ferguson. The company has asked a federal court to block enforcement of the merger condition and the investigative demand. (mediapost.com)

The competing narratives

  • NewsGuard’s story: a neutral, transparent ratings firm is being targeted for its editorial judgments. The FTC is overreaching by using merger remedies and investigations to hobble a private business whose work touches on public discourse. That, NewsGuard says, raises free-speech and due-process problems. (newsguardtech.com)

  • The FTC and critics’ story: regulators and some conservative outlets argue NewsGuard exercises editorial power that has real commercial effects and that its judgments may be politically biased. From this angle, the FTC’s scrutiny is about market power and potential exclusionary conduct — not censorship per se. Public comments from outlets like Newsmax influenced how the merger language was revised, suggesting industry players understood the remedy’s reach and pushed to shape it. (washingtonpost.com)

Both sides point to market realities: when ratings influence ad placement, they affect revenue flows. The novel legal wrinkle is whether a regulator may lawfully condition a merger or investigate a small ratings firm in a way that some regard as singling out protected speech.

Broader implications

  • The case could reshape how third-party content evaluators operate in advertising markets. If agencies are barred from relying on such ratings, advertisers lose one tool for brand protection; if regulators are limited, they may be less able to police potential collusion or exclusionary tactics in ad buying.
  • There’s a constitutional debate at the center: does the First Amendment protect the editorial judgments of a private ratings firm from regulatory interference? Conversely, do regulators have the authority to step in when a ratings product materially affects market competition or harms specific outlets?
  • The dispute exposes how intertwined advertising, editorial judgments, and platform economics have become. A private score can effectively act like a traffic light for publishers; when government action changes who can see or use that traffic light, the ripple effects are political, commercial, and civic.

My take

This lawsuit sits at the intersection of market structure and speech. NewsGuard’s methodology is transparent and human-driven — that matters in an era of opaque algorithmic moderation — but its influence on advertisers gives its judgments real economic weight. Regulators worried about arbitrary exclusion in ad markets have a legitimate role; at the same time, wielding merger conditions or sweeping investigative powers in ways that single out a small player risks the appearance (and perhaps the reality) of viewpoint-based regulation.

The healthier path would be clearer rules and neutral standards for ad buyers and ratings services: transparent criteria (which NewsGuard publishes), robust appeals and correction processes for rated outlets, and merger remedies narrowly targeted at anticompetitive conduct rather than broad language that could be read as a blacklist. These guardrails would protect both market fairness and free expression.

Final thoughts

At stake is not only one company’s business but the architecture of trust in the information ecosystem. When ratings, advertisers, and regulators collide, the outcome will shape how audiences find reliable information and how publishers — of whatever stripe — survive. Courts will now have to weigh whether the FTC crossed a constitutional line or acted within its mandate to police markets. Either way, the case underscores that in today’s media economy, the line between commerce and speech is increasingly hard to draw.


TikTok Outages Fuel U.S. Trust Crisis | Analysis by Brian Moineau

When a Power Outage Looks Like Politics: TikTok’s U.S. Glitches and the Trust Test

A handful of spinning loading icons turned into a national conversation: were TikTok’s recent U.S. posting problems just a technical headache, or the first sign of politically motivated content suppression under new ownership? The short answer is messy — a weather-related power outage is the proximate cause TikTok and its data-center partner point to, but the timing and stakes make user suspicion inevitable. (investing.com)

Why people noticed — and why the timing matters

  • TikTok users across the U.S. reported failures to upload videos, sudden drops in views and engagement, delayed publishing, and content flagged as “Ineligible for Recommendation.” Those symptoms arrived within days of the formation of a new U.S. joint venture that moved much of TikTok’s operations and data oversight stateside. (techcrunch.com)
  • The company and Oracle (one of the new venture’s managing investors) say a weather-related power outage at a U.S. data center triggered cascading system failures that hampered posting and recommendation systems — and that they’re working to restore service. (investing.com)
  • But because the outage overlapped with politically sensitive events — and came right after the ownership change — many users assumed causation: new owners, new rules, and sudden suppression of certain content. That leap from correlation to accusation is understandable in a polarized media environment. (wired.com)

The technical explanation (in plain language)

  • Data centers host the servers that store content, run recommendation systems, and process uploads. When a power outage affects one, services can slow down, requests can time out, and queued operations (such as recommendation-feed refreshes) may be lost or delayed. (techcrunch.com)
  • Complex platforms typically have redundancy, but real-world outages—especially weather-related ones affecting regional power or networking—can produce “cascading” failures where multiple dependent systems degrade at once. That can look like targeted suppression: a video suddenly shows zero views, a post is routed into review, or search returns odd results. Those are plausible failure modes of infrastructure, not necessarily evidence of deliberate moderation. (techcrunch.com)
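The cascade described above can be made concrete with a toy model: a chain of dependent services in which a single upstream outage makes every downstream feature fail, even though only one component is actually broken. This is an illustrative sketch, not TikTok's actual architecture — the service names are invented:

```python
class Service:
    """Toy model of a dependent service: a call succeeds only if the
    service itself is healthy AND its upstream dependency responds."""

    def __init__(self, name, dependency=None, healthy=True):
        self.name = name
        self.dependency = dependency
        self.healthy = healthy

    def call(self):
        if not self.healthy:
            return None  # the service itself is down (timeout/error)
        if self.dependency is not None and self.dependency.call() is None:
            return None  # cascading failure: upstream outage propagates
        return "ok"

# Storage sits behind recommendations, which sits behind view counts.
storage = Service("storage")
recommender = Service("recommendations", dependency=storage)
view_counter = Service("view-counts", dependency=recommender)

storage.healthy = False          # simulate the data-center power loss
print(view_counter.call())       # None: views vanish, looking "suppressed"

storage.healthy = True           # power restored
print(view_counter.call())       # "ok": counts recover once upstream is back
```

The point of the sketch: to a user, the failure surfaces at the far end of the chain (zero views), nowhere near the component that actually lost power — which is exactly why an infrastructure outage can look like targeted moderation.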

The political and trust dimensions

  • Ownership change matters. TikTok’s new U.S. joint venture — with Oracle, Silver Lake and MGX as managing investors and ByteDance retaining a minority stake — was explicitly framed as a national-security and data-protection fix. Because that shift was sold as protecting U.S. users’ data and content integrity, anything that looks like content interference becomes a high-suspicion event. (techcrunch.com)
  • Political actors amplified concerns. State officials and high-profile voices raised alarms about potential suppression of content critical of political figures or about sensitive events. That political amplification shapes user perception regardless of technical facts. (investing.com)
  • The reputational cost is asymmetric: one glitch can undo months (or years) of trust-building. Even if an outage is genuinely technical, the brand hit from a moment perceived as censorship lingers.

What platforms and users can learn from this

  • Operational transparency matters. Quick, clear explanations from both the platform and its infrastructure partners — with timelines and concrete remediation steps — reduce the space for speculation. TikTok posted updates about recovery progress and said engagement data remained safe while systems were restored. (techcrunch.com)
  • Technical resiliency should be framed as a trust metric. Redundancy, better failover testing, and public incident summaries help show that problems are infrastructural, not editorial.
  • Users want verifiable signals. Independent third-party status pages, reproducible outage telemetry (e.g., Cloudflare/DNS data), or audits of moderation logs (where privacy and law allow) are examples of credibility-building tools platforms can use. (cnbc.com)
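As a sketch of what a "verifiable signal" could look like in practice, here is a minimal routine that reduces a machine-readable status feed to a one-line summary that anyone could log and compare over time. The payload shape and component names are hypothetical, not a real TikTok or Oracle API:

```python
import json

def summarize_status(payload: str) -> str:
    """Reduce a status-page JSON document to a one-line human summary,
    so outages can be independently recorded and audited later."""
    status = json.loads(payload)
    degraded = [c["name"] for c in status["components"]
                if c["state"] != "operational"]
    if not degraded:
        return "all systems operational"
    return "degraded: " + ", ".join(degraded)

# Example payload in the shape many public status pages expose.
sample = json.dumps({
    "components": [
        {"name": "uploads", "state": "outage"},
        {"name": "recommendations", "state": "degraded"},
        {"name": "messaging", "state": "operational"},
    ]
})
print(summarize_status(sample))  # degraded: uploads, recommendations
```

Because the summary comes from a published, machine-readable feed rather than a press statement, third parties can archive it and check the platform's later account of an incident against the record.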

What this doesn’t settle

  • An outage explanation doesn’t erase legitimate long-term worries about who controls recommendation algorithms, moderation policies, and data access. The ownership shift was built to address national-security concerns — but it also changes who sits at the control panel for the platform. That shift deserves continued scrutiny and independent oversight. (techcrunch.com)
  • Nor does it mean every future suppression claim is a false alarm. Infrastructure failures and deliberate interference can both happen; the challenge is designing verification systems that reduce both false alarms and missed detections.

A few practical tips for creators and everyday users

  • If you see sudden drops in views or publishing issues, check official platform status channels first and watch for updates from platform infrastructure partners. (techcrunch.com)
  • Back up important content and diversify audiences across platforms — creators learned this lesson earlier in the TikTok ban saga and during past outages. (cnbc.com)
  • Hold platforms and new ownership structures accountable for transparency: ask for incident reports, moderation audits where possible, and clearer explanations about algorithm changes.

My take

Timing is everything. A power outage is an ordinary, solvable technical problem — but in the context of a freshly restructured, politically charged ownership story, ordinary problems become extraordinary trust tests. Platforms that want to keep their communities need to treat operational reliability and public trust as two sides of the same coin. Faster fixes matter, yes — but so do pre-committed transparency practices and independent verification so that the next outage doesn’t automatically become a geopolitical headline.


Snap’s $400M AI Search Gambit Changes | Analysis by Brian Moineau

Snap’s $400M Bet on Perplexity: Why Snapchat Just Got a Lot More Curious

Snap’s announcement that Perplexity will pay $400 million to integrate its AI-powered search engine into Snapchat feels like one of those pivot moments you can almost hear in slow motion. The deal — a mix of cash and equity, rolling out early in 2026 — immediately lit a fuse under Snap’s stock and reframed the company’s AI ambitions from experiment to platform play. But beyond the market fireworks, this pact tells us something about the next phase of social apps: search and conversation are converging inside the apps people already use every day.

Quick snapshot

  • Perplexity will be integrated directly into Snapchat’s Chat interface, surfacing verifiable, conversational answers to user questions.
  • Perplexity will pay Snap $400 million over one year (in cash and equity), with revenue recognition expected to begin in 2026.
  • Snap will keep its own My AI chatbot; Perplexity will act as an “answer engine” available inside chat, with Perplexity controlling the response content.
  • The news came alongside stronger-than-expected Q3 results from Snap, and the stock jumped sharply on the announcement. (investor.snap.com)

Why this matters (and why investors cheered)

  • Distribution = growth for AI startups. Perplexity gains access to Snapchat’s nearly one billion monthly users as a built-in capability — a shortcut to scale that usually takes years (and huge marketing budgets). That distribution is worth a lot in today’s attention economy. (techcrunch.com)
  • New revenue model for Snap. Instead of building and owning every AI layer, Snap is becoming a marketplace — a platform that offers high-quality third-party AI features and captures revenue for the placement. That’s a faster, less risky route to monetization than trying to train everything in-house. (investor.snap.com)
  • User behavior is changing. People prefer getting answers where they already spend time. Embedding conversational search inside chat reduces friction and keeps attention and ad dollars inside Snapchat instead of sending users off to the open web. (reuters.com)

The practical trade-offs and questions

  • Who controls the content? Snap says Perplexity will control its responses and that Perplexity won’t use those replies as ad inventory. That preserves a level of editorial and brand separation — but it also raises questions about moderation, factual accuracy, and how disputes will be handled when AI answers go wrong. (investor.snap.com)
  • Data and privacy. Snap has claimed user messages sent to Perplexity won’t be used to train the model, but users will still have messages routed to an external engine. Transparency about data flows and safeguards will be crucial for trust — especially for younger users and privacy-conscious markets. (investor.snap.com)
  • Economics vs. compute. Paying for AI placement is one thing; making the unit economics work long-term is another. Perplexity is effectively buying distribution today — but as usage scales, compute and moderation costs could balloon. Will revenue from the placement plus future monetization options offset those costs? Analysts flagged this as a watch item. (investing.com)

A competitive angle: Snap’s place among the AI arms race

Snap isn’t the only company stuffing AI into social. Meta, TikTok, X and others are all experimenting with conversational assistants, generative features, and AI-powered search. But Snap’s path is distinct:

  • Platform-first, partner-driven. Rather than bake everything into a proprietary stack, Snap is inviting specialized AI companies into its app as first-class partners. That could accelerate innovation and let Snap remain nimble.
  • Youthful audience, mobile-native context. Snapchat’s demographic — heavy on 13–34-year-olds — gives Perplexity a unique testbed for conversational search behaviors that other platforms may not replicate as cleanly. (investor.snap.com)

This approach could scale if Snap builds a robust ecosystem of AI partners (and if regulators or policy changes don’t intervene). Spiegel has signaled openness to further partnerships, hinting at a future in which different AI assistants sit alongside each other inside Snapchat for different tasks. (engadget.com)

Design and user experience implications

  • Contextual answers inside chat feel natural: asking a quick question in a conversation or while viewing content is low friction and meets users where they already are.
  • Verification and citations matter: Perplexity emphasizes “verifiable sources” and in-line citations. If executed well, that could distinguish Snapchat’s answers from hallucination-prone assistants and slow the growing distrust around AI outputs.
  • Product sequencing is key: the early-2026 rollout gives Snap time to A/B test placements, UI patterns, moderation flows, and ad/product hooks — which will determine whether this is sticky utility or a novelty. (investor.snap.com)

Possible risks and blind spots

  • Over-reliance on a single external provider. If Perplexity’s performance, reliability, or content decisions become problematic, Snapchat’s experience could suffer.
  • Regulatory heat. As governments scrutinize algorithmic systems, an in-app AI that serves tailored answers to young users could draw policy attention on age protections, misinformation, or advertising rules.
  • Cultural fit. Not all of Snap’s users will see value in an in-chat search engine. Adoption will depend on product framing, speed, trust signals, and how well the feature integrates into everyday use cases.

Snap’s playbook — what to watch next

  • Product signals: how prominently Perplexity is surfaced, whether it’s opt-in, and how Snap handles user controls and transparency.
  • Metrics: engagement lift, usage frequency per user, and whether this drives higher ad yields or subscription conversions for Snapchat+.
  • Ecosystem moves: announcements of other AI partners or a developer program that lets more AI agents plug into Snapchat.

My take

This deal is smart theater and pragmatic strategy rolled into one. For Perplexity, access to Snapchat’s massive, young, mobile-native audience is a growth shortcut. For Snap, the pact buys relevance in the AI moment without assuming all the execution risk. The real test will be execution: whether conversational search becomes a daily habit inside chats or remains a flashy add-on.

If Snap gets the UX right (speed, clear sourcing, and easy context switching) and keeps control over moderation and privacy, it could redefine how a generation asks questions — not by opening a browser but by typing into the same chats where they plan their weekends, gawk at memes, and swap streaks. That feels like a small change with outsized ripple effects.

Final thoughts

Big-dollar partnerships like this one are shorthand for a larger shift: apps are turning into ecosystems of specialized AI services, and the companies that win will be the ones that make those services feel native, trustworthy, and undeniably useful. Snap’s $400 million deal with Perplexity is a bold step in that direction — one that could either cement Snapchat as a go-to AI distribution channel or become another expensive experiment if the execution falters.


Itch.io is the latest marketplace to crack down on adult games – TechCrunch | Analysis by Brian Moineau

Navigating the Digital Playground: Itch.io's Crackdown on Adult Games

In a world where digital marketplaces are more crowded than ever, indie game platform Itch.io has made a bold move by "deindexing" adult and not-safe-for-work (NSFW) games from its browse and search pages. This decision is stirring the pot, reigniting discussions about content moderation, digital freedom, and the fine line between censorship and community standards.

The Move Towards Moderation

Itch.io, known for its eclectic array of indie games, has long been a haven for developers who want to express creativity without the constraints imposed by larger platforms like Steam or the Epic Games Store. The decision to deindex adult content is a significant shift for Itch.io, which has previously prided itself on its open marketplace approach. This change raises questions about what prompted the shift. Is it pressure from payment processors, a need to align with broader societal standards, or an attempt to curate a more family-friendly space?

A Broader Trend in Digital Spaces

Itch.io's decision is not happening in a vacuum. There's a broader trend of digital platforms reevaluating their content policies. For instance, OnlyFans made headlines in 2021 when it announced plans to ban sexually explicit content, only to reverse the decision after backlash from creators and users. Similarly, Tumblr's 2018 ban on adult content led to a significant drop in user engagement, illustrating the delicate balance platforms must maintain between content moderation and user satisfaction.

Implications for Indie Developers

For indie developers, Itch.io's move could mean a loss of visibility and revenue. Many developers rely on the platform's browsing features to reach new audiences. With adult games pushed to the fringes, developers may need to rethink their distribution strategies or find new platforms that welcome their content. This shift also invites a larger conversation about the spaces available for adult content in the digital marketplace. Is there a need for a new platform specifically tailored to adult indie games, or should existing platforms adapt to be more inclusive?

Connections to the Creative World

The conversation around content moderation isn't just limited to gaming. The art world, too, grapples with similar issues. Platforms like Instagram and Facebook have faced criticism for their content moderation policies, especially concerning artistic nudity. These platforms often walk a tightrope between adhering to community guidelines and respecting artistic expression. The parallels between these industries highlight a universal struggle in the digital age: finding the balance between creative freedom and community standards.

Final Thoughts

Itch.io's decision to deindex adult games is a reminder of the ongoing tug-of-war between content creators and platform policies. While the move aims to create a more navigable marketplace, it also underscores the need for clear, fair guidelines that respect both creators and consumers. As digital spaces continue to evolve, the challenge remains: how to foster an environment that celebrates creativity while maintaining a sense of community and respect. As we watch these developments unfold, one thing is clear: the conversation about content moderation is far from over, and its impact on creators and consumers alike will be felt for years to come.
