Sam Altman Says It’ll Take Another Year Before ChatGPT Can Start a Timer
An $852 billion company, ladies and gentlemen.

Images chosen by Narwhal Cronkite


In an era where AI promises to transform everything from healthcare to education, it might come as a surprise that one of the world’s most advanced chatbots, ChatGPT, cannot perform one of humanity’s oldest and simplest tasks—keeping time. Speaking recently on Mostly Human, OpenAI CEO Sam Altman acknowledged that it will take “another year” before ChatGPT’s voice model can start a timer—or perform timekeeping in general—with any level of reliability. That’s right: the flagship product of an $852 billion company still struggles with what appears to be a trivial task.

On one hand, Altman’s honesty reflects OpenAI’s transparency about technical limitations, but on the other, it raises important questions about the gap between AI’s dizzying promise and its practical functionality today. Let’s unpack why AI struggles with tasks like this, why that matters, and what OpenAI’s acknowledgment tells us about the future of intelligent systems.

[Image: Concept art of ChatGPT struggling with a basic task like starting a timer]

Why AI Struggles with Time: A Fundamental Problem

To understand why something as seemingly basic as keeping time continues to elude AI systems like ChatGPT, one must consider how these models work. At their core, they are probabilistic systems designed to generate responses based on patterns in data, not deterministic machines programmed for task-specific outcomes. As Altman puts it, “We will add the intelligence into the voice models,” but that is much easier said than done.

Large language models like ChatGPT are trained on vast text datasets that encode plenty of knowledge about time but none of the machinery for tracking it. Timing a task requires awareness of both duration and progression, yet a model has no internal clock: each response is generated statelessly from the text in front of it. Even when asked seemingly straightforward questions about elapsed time, ChatGPT often resorts to “hallucinations,” producing plausible-sounding answers rather than executing precise calculations.

For context, this is not just a quirk of ChatGPT but a problem endemic to most AI systems. According to Gizmodo, AI models also struggle with recognizing time on clocks from images, displaying the correct hour in AI-generated visuals, or even accurately recounting the length of a past conversation. The technical hurdles are rooted not just in concept but in computational design.

[Image: Diagram comparing how GPT models and humans process tasks, from training data to response generation]

Perception Versus Performance: Bridging the Expectations Gap

For many, Altman’s admission reflects a growing dilemma in the AI world: perception outpacing performance. With ChatGPT and its peers often touted as hallmarks of advanced intelligence, it’s easy to lose sight of their fundamental limitations. When Laurie Segall asked during the interview whether he would relay this issue to his team, Altman replied tersely, “No, no, that’s a known issue”—a hint at the operational blind spots even cutting-edge firms like OpenAI navigate daily.

Tech enthusiasts have had a field day with this disconnect. A viral TikTok video by @huskistaken encapsulates this frustration by humorously exposing the chatbot’s struggles. In the clip, ChatGPT not only fails to time a runner but also insists it did so correctly—despite clear evidence to the contrary. “AI is amazing until it’s not,” quipped one industry analyst reviewing the video. The situation escalated when Husk played Altman’s reaction back to ChatGPT itself. Its response? A bizarre assertion that “some voice models might not have all the capabilities, but I do.”

This back-and-forth illustrates a key challenge in AI deployment: user trust. If customers start perceiving AI systems as untrustworthy or needlessly complex for trivial tasks, that could slow uptake in more ambitious applications, ranging from autonomous vehicles to medical diagnosis tools. Systems like ChatGPT are still seen as novelties in some quarters rather than indispensable tools—an issue OpenAI must resolve if it hopes to maintain market dominance.

From Hype to Humility: Where Are We on the AI Curve?

The timer controversy may seem trivial, but it is emblematic of broader issues in AI today. Many experts argue the industry is somewhere between peak hype and the inevitable trough of disillusionment, as users confront the practical limits of technologies painted as near omniscient. While OpenAI remains a behemoth—valued at a staggering $852 billion—its challenges echo those faced by rivals like Google DeepMind and Anthropic.

A recent piece in Business Insider highlights this tension. OpenAI has scaled rapidly, yet executives like Fidji Simo, who oversees product applications, now face Herculean tasks: proving the platform’s profitability while keeping promises about its world-changing potential. The longer it takes to resolve these functionality gaps, the greater the risk that competitors will gain ground. For instance, DeepMind’s “Project Mario” has prioritized AI safety and governance, demonstrating how strategic pivots could yield tangible results that resonate with public stakeholders.

[Image: Software engineers working on a complex AI interface in a futuristic lab setting]

What’s Next: Key Implications and Questions to Watch

Altman’s acknowledgment sets the stage for what could be a transformative year—or a deeply humbling one—for OpenAI. If basics like timekeeping remain unresolved, public trust could waver, and scrutiny will likely intensify. Policymakers, already alerted by OpenAI’s recent letter alleging “coordinated attacks” by figures like Elon Musk and Meta, will have fresh ammunition to explore. Any perceived lack of progress could also alarm investors accustomed to year-over-year growth benchmarks.

On the flip side, solving this seemingly minor hiccup could carry outsized significance. A functional timer might not move markets, but it would alleviate nagging doubts about broader system reliability—a key factor for expanded adoption in industries such as education, finance, and healthcare. Furthermore, advancements geared toward solving the “time problem” could yield breakthroughs elsewhere, including scheduling, logistics, and other time-sensitive applications.

For everyday users, the debate around AI maturity remains front and center. The next 12 months will likely reveal not only whether OpenAI can hit milestones like enabling a basic timer, but also whether such incremental developments add up to a system that feels genuinely “intelligent.”

Final Thoughts: Signs of Progress or a Sobering Reality?

Sam Altman’s frank comments on ChatGPT’s limitations are a reminder that building artificial general intelligence (AGI) remains a marathon, not a sprint. While OpenAI’s advancements are undeniably impressive, they’re still works in progress—subject to the same growing pains and setbacks that have always come with technological revolutions.

For all the promise of AI in reshaping society, it’s the quieter, finer details that remain the test of credibility. In the months to come, AI adoption might hinge on these very details—whether they manifest as reliable timekeeping or entirely new functionalities. Either way, it’s a race against time, and ironically enough, ChatGPT still can’t clock it.
