A gentle ping sounds on a pair of smart glasses during a morning walk to work. The glasses’ artificial intelligence (AI) system detected a sidewalk closure long before it could be seen, suggesting a reroute. The next notification is about a coffee shop on the new path, noting that it has only a three-minute wait. That reroute won’t derail the morning coffee run or make the wearer late for work.
Inside the coffee shop, the glasses display the barista's name. It turns out she’s a neighbor, three doors down. The wearer had no idea she worked there, but the quiet nudge prompted eye contact and a friendly greeting. Thanks to the personal connection, the coffee arrives a little sooner.
Modern smart glasses can answer questions, send text messages and record video. But they aren’t yet a source of intelligence that fully anticipates the information you need.
Superintelligence is what many technologists consider the next stage of AI: smarter and proactive. The premise is that AI will become so intelligent and capable that it could outperform humans on almost any task.
Many believe AI can benefit health by improving well-being and helping cure disease, and that it will play a larger role in everyday life. Others, however, worry about overstating the benefits. They're wary of the privacy concerns that could arise from pairing a powerful computing system with the large-scale data collected by wearable devices such as smart glasses.
Among people who want to personally experience advanced forms of AI, there is growing interest in both smart glasses and superintelligence.
What is superintelligence?
Technologists (people who specialize in emerging technologies) define superintelligence as a highly advanced form of AI that exceeds human performance in many areas.
The AI systems that have become popular in the workplace and for personal use are powered by large language models (LLMs). LLMs are trained on massive amounts of data and can understand and generate language, images and videos. Computer scientists debate when, or if, AI systems will become advanced enough to meet the superintelligence threshold.
Behind the race to superintelligence
There isn’t universal agreement on the exact threshold to achieve superintelligence. Some technologists argue that it will be obvious if and when it happens. Others say it’s not even possible.
Ryan Watkins, PhD, professor of educational technology and associate director of the Trustworthy AI Initiative at the George Washington University, studies human-technology collaboration and how AI systems can be designed and deployed responsibly. He argues that it’s difficult to apply strict definitions to AI advancements, given that we’re looking at them through a human lens. Instead, what’s more important is how useful people find AI in practice.
“When you're home with a sick pet and you want to know if a rash is causing their illness, you take a picture and throw it into an AI system and it tells you — does it matter at that point whether philosophers consider it true intelligence? Absolutely not,” Dr. Watkins said. “You just want an answer and if the answer is good, you're happy. The semantics aren't really important when it's useful.”
It’s worth noting that when it comes to asking LLMs for health information, users need to be cautious about hallucinations, which is when an AI system presents false information as accurate. LLMs may offer a good baseline or starting point for research, but a doctor or other human expert is best for a diagnosis or discussing a serious health matter. Fortunately, according to Dr. Watkins, as AI advances, such hallucinations are becoming more limited to “edge cases” beyond a model’s knowledge base.
For most day-to-day use, wearable devices like smart glasses show promise for changing the dynamics of how people interact with AI systems, according to Bob O’Donnell, president and chief analyst of TECHnalysis Research.
With AI, smart glasses can answer questions, give a user real-time updates and provide context about one’s environment. These applications can help people who are visually impaired navigate their world with more independence.
O'Donnell points to the brain's reliance on sight as the underlying logic. Most human understanding, he says, depends on visual input, and that alone makes the case. "It just makes logical sense to incorporate that into a computing environment as well," he said.
O’Donnell said the appeal comes with complications. Having a camera always at the ready to record raises privacy questions that society will need to work through. “Some of it’s a little creepy,” he said. “And we have to deal with that.”
Ongoing scientific research on brain function points to vision’s essential role in how people interpret their environment. According to one published review, researchers describe vision as the primary entry point for environmental input into the human brain. The data the brain collects is linked closely to perception, movement, cognition and social functioning.
Survey research suggests that emphasis is well-placed. A 2019 study found that UK adults ranked sight as their most valued sense, and U.S. survey data have similarly found that many adults view vision as especially important to overall health and quality of life.
The definitions may be unsettled, but some researchers have attempted to add more precision to the terminology around AI. A 2025 review proposed definitions for advanced forms of artificial intelligence that could serve as a standard.
Artificial narrow intelligence, or narrow AI, covers today’s common AI systems. The AI chat tools that millions now use to answer questions, summarize documents or draft emails fit in this category. This also applies to smart eyewear, where an AI system works within its dedicated environment.
Artificial general intelligence, or AGI, is a hypothetical form of AI capable of human-level thinking across a wide range of domains.
Artificial superintelligence, also referred to as ASI, is the highest theoretical stage — a form of AI that would surpass human ability across virtually every domain of human activity, from scientific reasoning to social interaction. This term is often synonymous with superintelligence.
While these categories may offer a framework for understanding AI's trajectory, researchers, technologists and AI companies do not agree on firm lines between these stages, or even on what they should be called.
Today, wearable technology like smart eyewear offers one of the most personalized and accessible ways to experience this technology.
The growing market for smart glasses
Some optometrists interviewed believe that interest in and adoption of smart glasses will change how they approach patient care.
Aarlan Aceto, OD, professor and program coordinator of Ophthalmic Design and Dispensing at CT State Community College, Middlesex, leads a program training future opticians. He believes eye care professionals will need to be familiar with smart glasses’ features as more patients will seek advice. He sees smart glasses, and the increasing capabilities of AI within them, drawing an entirely new kind of patient into the practice.
"We're going to get more people in the pipeline as patients, not because they can't see or think they need eye care, but because they want the technology. It just happens to come from an eye care professional," Dr. Aceto said. As more consumers get curious, eye care providers will need to understand the technology and its possibilities.
Joseph Allen, OD, FAAO, is an optometrist, founder of BetterEyeHealth and popular social media educator. Dr. Allen has used and reviewed multiple smart glasses devices and sees the technology at a turning point. "The capabilities seem like something out of a science fiction film, but it feels more like this is already here," he said.
Consumer interest is building. A 2025 report found that 58% of consumers said they were familiar with smart eyewear. Four in 10 said they would consider purchasing in the next 12 months, while 14% reported having already bought a pair.
Yet the category still faces obstacles, with half of respondents saying they did not yet see a clear need or purpose for the technology, and 41% citing cost as a barrier.
At a recent vision conference, Dr. Allen said that multiple companies demonstrated displays offering real-time translation, object identification and augmented reality.
Smart glasses may prompt a larger conversation about vision health. Dr. Allen pointed to a parallel from 2011, when a popular handheld gaming device with a 3D display led to a wave of children visiting eye clinics with headaches and eye strain. He said the device didn’t cause underlying vision problems; it instead brought some existing binocular vision problems to the surface.
Dr. Allen believes that as more people use smart glasses, there could be a similar increase in reports of undiagnosed vision problems coming to light.
Dr. Aceto sees the convergence of AI and smart eyewear as an important inflection point for the profession. Eye care providers who learn about AI and smart devices can guide patients in using this technology. In an ideal scenario, people would consult their optometrist, ophthalmologist or optician with questions rather than buy a device online or from a consumer electronics store.
What’s ahead for superintelligence and AGI
Whether AI can achieve superintelligence is one of the most contested questions in the field. O'Donnell, who tracks the consumer technology market closely, sees the terminology itself as part of the problem.
"There's a lot of baggage associated with that phrase," he said. "Depending on who you talk to, there are a lot of people who say that's ridiculous, we're never going to get there, because it implies a level of independent thinking that machines can't do."
Models are advancing quickly, though progress is uneven. A 2025 report from Stanford’s Institute for Human-Centered Artificial Intelligence found that AI model performance rose sharply on industry benchmarks. However, some models still have gaps in their performance on complex reasoning tasks.
Dr. Watkins takes a pragmatic view of the uncertainty. The definitions may never be settled, he says, but that may not matter. What counts is whether people experience AI as intelligent. By that measure, a more important threshold of artificial intelligence has already been crossed.
“Maybe we already have it and just aren't willing to admit it,” he said. “Having technology that’s interesting and useful is the critical component.”
What is clearer, according to Dr. Watkins, is what comes next. AI researchers are developing what are known as world models. These systems are designed to understand three-dimensional space, not just language. A world model would understand its physical surroundings and respond accordingly. He expects meaningful advances within one to two years.
As displays evolve and AI becomes more context-aware, the ways people interact with these devices are also expected to expand, moving beyond voice commands toward gesture control and other hands-free inputs that keep the wearer's attention in front of them.
“That's really soon,” Dr. Watkins said, “when you think about how far we've come since 2020, when people thought this was 20 years off.”
For smart glasses, world models represent a critical leap. A device that understands its physical surroundings and recognizes context in real time can share information before being asked. That level of awareness begins to resemble what technologists mean by the term superintelligence.
Whether that version arrives in two years or 10, the direction is clear. "It would be hard to go back," Dr. Watkins said. "You get used to it really fast."