The Current #1: A Case Study in Consumer Gen AI Trust

by Hunter Worland · Apr 09, 2024

The Current is a bi-weekly series from NEA on the developments impacting consumer technology. Each installment examines a trend, disruption, or opportunity with consumer data. Posts are concise, informative, and always current.


The New York Times is generating its own headlines about generative AI lately. The ongoing legal battle between The Times and OpenAI centers on copyright law, but it also prompts a larger question about generative AI interactions. How do users value answers relative to the source? 

Our consumer panel answered this question with regard to the news media, but the insights apply to broader AI applications — AI tutors, financial copilots, health clinicians, shopping assistants, really any interaction where consumers trust generative AI output without total transparency into its inputs. 

I see consumer preference for how AI applications attribute information as a gradation of sensitivity — from decisive to indifferent — roughly categorized across four grades: 

  • Decisive sensitivity: Refusal to use generative AI platforms for news consumption, due to lack of trust relative to traditional media

  • Selective sensitivity: Selective trust depending on the specific source the generative AI platform trained on (e.g., the New York Times but not Fox News)

  • Conditional sensitivity: Use of generative AI platforms for news consumption conditional on training on generally trusted data sources (without distinction of one source relative to another)

  • Indifference: Unconditional, comprehensive trust in generative AI platforms for news consumption

Our survey panel, when asked in plain English, is overwhelmingly conditional or indifferent. My interpretation of the collective response, also in plain English, is: just give me the answer.

But certainly, trust varies by inquiry. With a digestible but distant media subject held constant, we progressed our panel from a factual question to a granular one, then to a subjective one, and finally to a projective one – a case study within a case study.

Our data shows trust erodes as questions become more subjective, hypothetical, and negative. But distrust, even when overwhelming, is relative. We asked the same set of questions again – but substituting a generic AI platform for the respondent’s preferred news outlet, like the Times. Readers might have to squint to see the difference.

The consumer perspective can inform product decisions across applications beyond news. A fintech copilot, for instance, might need to cite sources or trigger human confirmation when making projections or fielding subjective inquiries. A mental health AI clinician might need to supplement its responses with clinical data to improve trust when answering granular medical questions.

Reach out to continue the conversation.