Web Summit Vancouver: Gary Marcus on AI Limitations and Risks

Reviewed by Ali Aleali, CISSP, CCSP · Last reviewed March 18, 2026

Key Takeaways from the Web Summit Keynote: A Reality Check on the AI Hype

AI dominated the conversation at this 2025's Web Summit, and for good reason. But amid the enthusiasm, one keynote stood out for its critical, grounded perspective on where the current AI landscape is falling short, and where meaningful progress might actually happen. Below are the summarized insights from the talk, which pulled no punches when it came to the limitations of large language models (LLMs) and the risks of inflated expectations.

Key Criticisms of Current AI

Marcus's central argument is that current LLMs represent just one narrow slice of what AI could eventually become, and the industry's fixation on this single approach is creating blind spots. A fundamental flaw in today's models is their lack of structured database records, which makes them inherently prone to hallucinations. Marcus described the current state of prompting as a "prompt and pray" dynamic, where even a well-crafted prompt offers no guarantee of a reliable output.

The hallucination problem is particularly stubborn. Despite more than 25 years of research into the underlying architectures, hallucinations remain common. And the economics raise their own questions: roughly $500 billion has been spent on AI development, with only about $7 billion in actual revenue generated to date. That gap between investment and return deserves serious scrutiny.

Technical Limitations

Marcus characterized LLMs as "autocomplete on steroids," a system that excels at pattern matching but lacks true reasoning or logic. Unlike traditional computer systems that can perform formal proofs (the kind used, for example, to verify Windows device drivers), LLMs operate without any formal reasoning capability. Their limited context buffers lead to inconsistent and unpredictable output, and the strategy of scaling models with more data appears to be hitting diminishing returns, as evidenced by reported hurdles in GPT-5 development.

Security is another concern. Marcus pointed out that even well-regarded models have been found vulnerable to prompt injection and jailbreak attacks, highlighting a category of risk that organizations adopting AI tools should weigh carefully.

Alternative Approaches

Marcus made a strong case for neuro-symbolic AI, a hybrid model that combines neural networks with symbolic logic. This approach offers several advantages over pure LLM architectures: better interpretability, easier debugging, the ability to enforce explicit constraints, and the option to directly program certain behaviors. AlphaFold's breakthrough in protein folding was cited as a compelling example of what hybrid approaches can achieve when neural and symbolic methods work together.

Market Implications

The financial picture around AI raises familiar warning signs. Marcus drew comparisons to WeWork, pointing to inflated valuations (OpenAI being a prominent example) and questioning whether the current trajectory is sustainable. A survey of 7,000 companies revealed no measurable bottom-line impact from AI adoption. A price war among AI providers is emerging, and even Nvidia's growth, while still substantial at 20%, has slowed significantly from its earlier explosive trajectory.

That said, Marcus acknowledged several high-value use cases where AI is delivering real results. Coding assistance, where hallucinations can be caught and corrected through testing, is one. Scientific research and drug discovery are others, particularly in biology where AI models have accelerated work that would have taken years by traditional methods.

The risks, however, are equally significant. Misinformation at scale, cybercrime potential (a topic we explored in our piece on AI governance in modern GRC), disruption to education systems, and the growing sophistication of deepfakes and media manipulation all demand serious attention from leadership teams.

About Truvo Cyber

Truvo Cyber works with Fintech and SaaS companies to build effective security programs that satisfy SOC 2, ISO 27001, and other compliance frameworks, with up to 80% automation. The result is a security posture that unblocks deals, builds client trust, and runs without consuming CTO bandwidth. Organizations can see a live Trust Center in action at trust.scrut.io.

To start a conversation about building an effective security program, connect with Ali on LinkedIn or call 613-683-0564.

Ready to Start Your Compliance Journey?

Get a clear, actionable roadmap with our readiness assessment.

Share this article:

About the Author
Ali Aleali
Ali Aleali, CISSP, CCSP

Co-Founder & Principal Consultant, Truvo Cyber

Former security architect for Bank of Canada and Payments Canada. 20+ years building compliance programs for critical infrastructure.