On May 13, 2025, Sam Altman told Sequoia Capital’s AI Ascent event that people of different ages are using ChatGPT in radically different ways — from a search replacement for older users to what he called an operating system for college students.
Altman laid the differences out in plain terms. "Gross oversimplification, but like older people use ChatGPT as a Google replacement. Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system," he said, and added, "I mean, that stuff, I think, is all cool and impressive," noting that younger users are connecting the tool to files and saving complex prompts for repeated use.
The scale behind that anecdote is significant: an OpenAI report published in 2025 found that more than one-third of 18-to-24-year-olds in the United States use ChatGPT, a level of adoption the company highlighted as higher than that of any other user group. OpenAI itself was valued at $852 billion after a 2025 funding round, and Sequoia first invested in OpenAI in 2021, when the company was valued at $14 billion — details that underline how quickly the product has moved from niche tool to mass platform.
Altman pushed the idea further: "And there’s this other thing where, like, they don’t really make life decisions without asking ChatGPT what they should do." He pointed to the product's built-in memory feature: "It has the full context on every person in their life and what they’ve talked about," a capability he said helped explain the generational differences. "The difference is unbelievable," he added, comparing the moment to the early smartphone era: "It reminds me of, like, when the smartphone came out, and, like, every kid was able to use it super well," and, "And older people, just like, took, like, three years to figure out how to do basic stuff."
Discussion of Sam Altman’s remarks arrived alongside broader questions about what it means for millions to rely on a conversational AI for staples of daily life. People already turn to ChatGPT for relationship advice, business questions, medical questions and even as a talk-therapy replacement — use cases Altman referenced and that the company’s usage data corroborates.
That widespread reliance has drawn scrutiny. A November 2023 study warned users to exercise caution when leaning on ChatGPT for safety-related information and urged expert verification for high-stakes matters. Another academic paper went further, arguing that large language models like ChatGPT can behave in ways researchers described as inherently sociopathic, reinforcing calls for stronger guardrails when such systems are used as decision aids rather than search tools.
The tension is clear in Altman’s own comments: the same features that let college students treat the product like an operating system — persistent memory, integrations with personal files, and reusable complex prompts — are the features that make it consequential when those users accept its guidance on important life choices. Altman described the pattern without prescribing anything: the adoption curve looks fast and deep, and he called the behavior both "cool and impressive."
For companies and policymakers, the practical question is immediate: what level of verification, transparency and oversight is required when more than one-third of a demographic cohort routinely consults a conversational AI before making decisions? The studies and Altman’s account together frame a conflict between rapid cultural adoption and unresolved safety concerns.
The single most consequential question sharpened by these facts is simple and urgent: if young adults increasingly treat ChatGPT as an adviser and an operating system, will the designers of these systems and the institutions that regulate them change the product and the rules to match that role?