
What do you think is the most valuable asset in today’s world? Surprisingly, it’s not oil, advanced computing, or cryptocurrencies like Bitcoin. These things are inert without the human attention that infuses them with value. Attention has emerged as the most powerful and coveted resource, shaping everything from the demand for rare minerals to international trade policy. A single well-placed advertisement, say for an iPhone, can set off a ripple effect of labor, resource extraction, and geopolitical influence. Mastering the physical realm becomes simple once you can harness attention at scale.
The boundary between our minds and the attention economy is fading
Today, corporations, wealthy individuals, and soon artificial agents are in a fierce competition to seize and channel human attention for their own purposes. For much of the last three decades, attention was captured through clunky, broad methods—think print media, radio, and television. These tools, despite their inefficiencies, were powerful enough to mobilize entire populations, orchestrate societal change, and even drag nations into the dehumanizing machinery of total war. The introduction of the iPhone marked a significant leap forward in capturing attention: it acted as a cognitive extension, connecting individuals to a global network like never before.
Fast forward to now, less than 17 years after the iPhone’s debut, and we are on the brink of another transformative leap. Imagine a future where corporations view your mind as an extension of their data systems, mining your attention directly from your neural activities for profit while selling you back a façade of control. We could be heading towards a reality where our experiences are “enhanced” by limitless artificial worlds and sensations.
“The sky above the port was the color of television, tuned to a dead channel.”
This quote from William Gibson’s 1984 novel, Neuromancer, eerily resonates with the trajectory our society is taking. Privacy is becoming obsolete, with data hoarded by large corporations as a commodity. Our collective reality may soon resemble a “consensual hallucination,” experienced by billions who are neurally connected to cyberspace, dominated by corporate interests and built on a precarious technological infrastructure that is routinely hacked and manipulated. The release of seemingly innocuous technologies, such as the Apple (AAPL) Vision Pro and Meta’s Orion AR glasses, brings us ever closer to this unsettling reality. These devices carry advanced hardware that links our intentions and thoughts to the digital realms they create.
Want attention? Focus on the eyes
The Apple Vision Pro builds on the legacy of Google (GOOGL) Glass, establishing a feedback loop that responds to our gaze by tracking eye movements. Its array of internal sensors can detect arousal, cognitive load, and emotional states through minute changes in pupil size and rapid eye movements. For instance, pupil size can serve as a direct indicator of noradrenergic tone, reflecting sympathetic nervous system activity governed by neurotransmitter output from the locus coeruleus, a brain region associated with attention and arousal. While its current applications may seem limited, the technology’s ability to dispense with external controllers entirely, letting the gaze itself navigate digital environments, is remarkable.
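To make the pupillometry point concrete, here is a purely hypothetical sketch (not Apple’s actual pipeline, whose internals are not public) of how raw pupil diameters could be turned into a crude arousal index by z-scoring task-time samples against a resting baseline:

```python
from statistics import mean, stdev

def arousal_index(pupil_mm: list[float], baseline_mm: list[float]) -> list[float]:
    """Z-score each pupil-diameter sample against a resting baseline.
    Larger positive values suggest greater pupil dilation, a rough
    proxy for sympathetic (noradrenergic) arousal."""
    mu, sigma = mean(baseline_mm), stdev(baseline_mm)
    return [(d - mu) / sigma for d in pupil_mm]

# Hypothetical readings: resting diameters vs. a demanding task (mm).
baseline = [3.0, 3.1, 2.9, 3.0, 3.1]
task = [3.6, 3.8, 3.5]
scores = arousal_index(task, baseline)
```

Even this toy version shows why such signals are sensitive: a stream of z-scores like these, logged continuously, amounts to a running record of what holds your attention.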
You’re being molded, just like the molders
Despite these technological breakthroughs, it’s essential to understand that there are costs involved. Devices like the Vision Pro are subtly conditioning society for a future where more invasive technologies—such as Neuralink or other brain-computer interfaces—could undermine human autonomy entirely. Currently, economic incentives do not prioritize privacy, individual rights, or digital freedoms. Why? Because the economy thrives on structuring human behavior around stable and lucrative markets, which often revolve around aspects like sex, social status, and security.
These markets depend on memetics and the focused attention of organized groups rather than the self-sufficient, independent individual. If this grim reality comes to pass, any technology that links personal decision-making with open information systems will likely favor those who stand to profit the most—corporations, governments, and increasingly sophisticated artificial agents. The primary beneficiaries may not include the readers, authors, or the majority of users today. Instead, artificial agents might emerge as the true winners, optimizing their operations for objectives that could alienate the very humans who developed them. This is the path we are on unless proactive measures are taken.
A call for a biometric privacy framework
There is a widespread consensus that privacy is essential. However, despite frameworks like GDPR, CCPA, and pioneering initiatives such as Chile’s Neuro Rights Bill, the underlying issues remain unresolved and the risks continue to grow. While regulation and policy are essential, they are not enough without effective implementation.
A foundational embedding of digital natural rights into the very fabric of the internet and connected devices is crucial. This begins with simplifying the process for individuals to create and maintain self-custody of their cryptographic keys, which can safeguard communications, verify identities, and protect personal data without reliance on corporations, governments, or third parties.
Holonym’s Human Keys exemplify such a solution. By empowering individuals to create cryptographic keys securely and privately, we can shield sensitive information while upholding privacy and autonomy. The brilliance of Human Keys lies in the fact that no single entity—be it a corporation, individual, or government—needs to be trusted for their creation or use.
Integrating technologies like homomorphic encryption with devices such as the Apple Vision Pro or Neuralink could revolutionize cognitive abilities without sacrificing the privacy of sensitive user data.
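To see what “computing without sacrificing privacy” means mechanically, here is a toy, deliberately insecure sketch of the Paillier cryptosystem, a classic additively homomorphic scheme: a server can combine encrypted values without ever decrypting them. The tiny hard-coded primes are for illustration only; real deployments use moduli of 2048 bits or more:

```python
import math
import secrets

# Toy Paillier keypair with insecure demo primes.
P, Q = 17, 19
N = P * Q                      # public modulus n
N2 = N * N                     # n², the ciphertext modulus
G = N + 1                      # standard generator choice g = n + 1
LAM = math.lcm(P - 1, Q - 1)   # private exponent λ
MU = pow(LAM, -1, N)           # μ = λ⁻¹ mod n (valid when g = n + 1)

def encrypt(m: int) -> int:
    """Enc(m) = g^m · r^n mod n², with random blinding factor r."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if math.gcd(r, N) == 1:
            break
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    """Dec(c) = L(c^λ mod n²) · μ mod n, where L(x) = (x − 1) // n."""
    return (((pow(c, LAM, N2) - 1) // N) * MU) % N

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
c = (encrypt(12) * encrypt(30)) % N2
total = decrypt(c)  # 12 + 30 = 42, computed without decrypting either input
```

In the neurotech setting, this is the shape of the promise: a device could upload encrypted biometric readings, a server could aggregate them homomorphically, and only the user’s own key could ever reveal the result.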
However, it’s vital to recognize that software solutions alone are insufficient. We also need secure hardware built to publicly verifiable, open standards. Governments should play a pivotal role in ensuring that manufacturers follow stringent security protocols when designing devices that manage and store cryptographic keys. Just as clean water and breathable air are public necessities, secure hardware for key storage should be recognized as a public good, with governments responsible for ensuring its safety and accessibility.
As we look toward the future of ethical neurotechnology, we must heed the warnings of visionary thinkers like Gibson, who cautioned against the erosion of privacy, autonomy, and humanity by technology. Brain-computer interfaces (BCIs) hold the potential to enhance human capabilities, but only if guided by ethical considerations. By embedding biometric privacy into the core of our digital systems, utilizing tools like self-custodial keys and homomorphic encryption, and advocating for open hardware standards, we can ensure that these technologies empower rather than exploit individuals.
We don’t have to accept a dystopian future. Instead, we can envision a landscape where innovation uplifts humanity, safeguarding our rights while unlocking new possibilities. This vision is not merely optimistic; it is essential for crafting a future where technology serves our needs rather than dominates us.