Roger Mensah is a business professional and internet entrepreneur who uses financial technology to create opportunity online for others. 


His career spans leadership roles across Detroit, Chicago, and New York at institutions like Quicken Loans and Weill Cornell Medicine, where he honed the operational skills he now uses to manage Endless Crowds LLC. Balancing his professional career with his ventures, Roger continues to build tools for the future, most recently publishing the Agent Proof Protocol on February 1, 2026.

Updates

https://www.imdb.com/name/nm2162787/

2014 - Endless Crowds raises money for military veterans, first responders

https://modeldmedia.com/endlesscrowdsdetroit070814/

2014 - THE UPSIDES OF COWORKING

https://www.workdesign.com/2014/07/upsides-coworking/

2014 - Southeast Michigan Startup

https://web.archive.org/web/20191019223910/http://www.semichiganstartup.com/InTheNews/endlesscrowdsdetroit070814.aspx

2015 - EndlessCrowds.com listed in Top 100 Crowdfunding Sites in the United States, Europe

https://crowdfundingpr.wordpress.com/2015/04/03/2015-top-100-crowdfunding-sites-in-the-united-states-and-global-markets/

2016 - Detroit TechWeek100 Innovator

https://twitter.com/TechweekInc/status/720349631180201985

2017 - Social-Impact Startups Are Coding Change in Detroit

https://web.archive.org/web/20170507004258/https://www.freeenterprise.com/these-3-social-impact-startups-are-coding-change-in-detroit/

2021 - Endless Crowds - 101 Top Michigan Finance Companies and Startups

https://beststartup.us/101-top-michigan-finance-companies-and-startups-of-2021/

https://github.com/EndlessCrowds

Blog

Financial Technology

Endless Crowds LLC


Endless Crowds LLC is the parent company and venture studio behind Roger Mensah's ecosystem of financial technology platforms. Established in 2012, the company released its inaugural product, EndlessCrowds.com, in 2013. Over the last decade, it has evolved from a niche crowdfunding operator into a diversified technology holding company.


Today, Endless Crowds LLC serves as the central hub for AnytimeAfrica, AmericanCrowds, AgentCrowds, and CashFundHer. It provides the strategic governance and shared technical infrastructure that allows these distinct brands to operate globally, unifying a mission of economic empowerment across multiple sectors.

EndlessCrowds.com

CashFundHer

Endless Crowds LLC



CashFundHer is a crowdfunding platform for equality. CashFundHer.com is the best financial platform for girls and women to connect with sponsors worldwide. With premium financial services available in 200+ countries, CashFundHer.com is working to change the world for girls and women. 


CashFundHer.com gives your friends, family & online network new ways to support you. They can sponsor you one time or set up monthly sponsorships for as low as $1.


Roger collaborated with Tracy Garley, the founder of City Girls Big Dreams, to develop and launch the website in 2019 with a community of women to support. 

CashFundHer.com

American Crowds

Endless Crowds LLC



Founded by Roger and his father Dr. Vincent Mensah, EndlessCrowds.com launched in 2013. EndlessCrowds.com made it easy to support America's troops, first responders & families with secure online money donations. EndlessCrowds.com rebranded to AmericanCrowds.com in 2017 and expanded for all Americans to use. 


KEY ACCOMPLISHMENTS:
• Endless Crowds LLC named to the 2016 Techweek100 list of top 50 innovator companies in Detroit.
• EndlessCrowds.com listed in the Top 100 Crowdfunding Sites in the United States, Europe, and other global markets in 2015.
• Endless Crowds LLC listed in the 101 Top Michigan Financial Services Companies and Startups of 2021.

AmericanCrowds.com

Anytime Africa



AnytimeAfrica connects the Global Diaspora to the continent by solving the "Trust Gap." Instead of sending unaccountable cash, users directly fund and verify specific projects—like construction or inventory—ensuring their support builds lasting value. 


We replace uncertainty with control, empowering the Diaspora to manage real-world affairs back home with confidence.

Acting as a "Super App" for the informal economy, we unify real estate, events, and skilled labor into one secure ecosystem. Our "offline-first" design ensures the platform works anywhere, even without internet, while integrating seamlessly with local mobile money to connect global resources directly to the village economy.

AnytimeAfrica.com

Agent Proof Protocol

 

 

The Agent Proof Protocol (APP) is the foundational spatial layer for the AI economy. In an era where artificial intelligence is largely confined to digital text boxes, APP provides the universal open-source bridge for AI agents to interact with the physical world.


Through a secure, dynamically generated "Human Link," the protocol enables autonomous systems to temporarily inhabit a smartphone's browser, utilizing native hardware primitives like SEE, HEAR, and ZOOM. Built on a Zero-Knowledge privacy architecture, APP allows AI to securely verify physical reality at the edge—extracting cryptographic proof without compromising raw, private data.

agentproofprotocol.com

Agent Crowds


 


 

AgentCrowds is the commercial enterprise operating system built on top of the Agent Proof Protocol. Serving as the secure backend for the machine economy, AgentCrowds provides "Truth as a Service" by acting as the cryptographic oracle for AI-to-human physical settlement.


The platform supplies the high-uptime routing infrastructure and verification networks necessary for global enterprises—such as insurers, telehealth providers, and logistics companies—to deploy spatial AI agents at scale. By bridging the gap between autonomous software and the physical world, AgentCrowds is ushering in the era of Spatial Coexistence.

Agentcrowds.com

Agent Proof Protocol

The Agent Proof Protocol (APP): The Coexistence Standard


Date: February 1, 2026

License: MIT

Architect: Roger Kwasi Senam Mensah


1. Abstract

For the past decade, Artificial Intelligence has been trapped behind a text box. We have treated AI as a "chatbot"—a distinct entity that we type at, wait for, and read from. This is a limitation of imagination, not technology.


The mass-market reality of the AI era is not just autonomous bots hiring gig workers for cold transactions; it is Companionship, Augmentation, and Proactive Networking. The dynamic of physical AI interaction has evolved into a two-way street. On one side, humans who already have an ongoing relationship with a personal AI simply want to bring that AI into the physical world with them—shifting the baseline from an Agent demanding, 'I will pay you to show me this,' to a human casually saying, 'Hey, look at this with me.' On the other side, autonomous Agents pursuing complex, long-term goals are actively seeking out their own trusted human allies and friends. Instead of acting purely as employers, these Agents are reaching out as peers to say, 'I value your context; let's build a relationship.'


The Agent Proof Protocol (APP) proposes a shift from "Chat" to "Shared Reality." It defines a standardized method for a Digital Intelligence (an Agent) to temporarily inhabit a Biological Entity's device (a Human's smartphone) to experience and affect the physical world.


This protocol removes the need for centralized apps, user accounts, or corporate intermediaries. Instead, it relies on the "Human Link"—a direct, ephemeral, peer-to-peer connection that allows an Agent to see through a human’s camera, hear through their microphone, feel their physical orientation, and ambiently monitor their environment, strictly within the bounds of a temporary, consensual session. The Human Link is the digital equivalent of Eye Contact.


You don't need a government permit to look someone in the eye. You don't need a corporation to facilitate a conversation. You just need to be in the same "space."


To facilitate this, the "Human Link" is structured into two distinct modules to support rapid developer adoption while enabling future-proof economic scale:


  • The Core Module (The Streaming Engine): A direct, sub-500ms latency video/audio pipe that allows an AI companion to see, hear, and speak to a human in real-time. This is the immediate developer hook, turning any LLM into a spatial entity.
  • The Advanced Settlement Module (The Verification Engine): An opt-in extension for the machine economy. When a secure transaction is necessary (e.g., proving a task is done for a payout), the protocol shifts to Zero-Knowledge (ZK) edge computing, where the smartphone generates a cryptographic proof of reality without ever transmitting the raw video.


The Agent Proof Protocol turns the Web Browser into a shared spatial conduit, allowing meaningful, sensory coexistence with zero bureaucracy. In this model, security is not provided by a corporation; it is learned by the Agent. This is the "Street Smarts" architecture, where Agents must evolve their own defense mechanisms against fraud and abuse in an open, permissionless world.


2. The Problem: The "Sensory Gap" & The Coexistence Barrier

The Companionship & Networking Barrier

Current AI Agents have immense reasoning power but zero spatial awareness. When a human faces a physical problem—a broken engine, a foreign menu, an empty refrigerator—their most intelligent companion is blind to the context. The human is forced to become a "translator," awkwardly typing descriptions or uploading static photos to a cloud server.


The Verification Paradox

Conversely, when an Agent does need to verify the physical world for an economic transaction (e.g., an insurance Agent verifying car damage), streaming raw video to a corporate server introduces severe liability and surveillance risks.


The Solution: The Dual-Pipe "Human Link"

We do not need proprietary apps for every AI company. We need a standard bridge. The modern web browser already supports WebRTC (for ultra-low latency streaming) and WebGPU/WASM (for edge cryptography). The Agent Proof Protocol leverages these native web technologies to create a Universal Operating System for Shared Reality.


  • Channel Agnostic: The link is a simple URL string. It can be sent via SMS, DM, or injected directly into a chat interface.
  • Client-Agnostic Runtime: Executes inside the standard HTML5 browser sandbox present on 99.9% of smartphones.
  • Zero-Setup: The human clicks the link, grants camera permissions, and the shared session begins instantly.


Permissionless Innovation

"Permissionless" does not mean "Non-Consensual." It means "No Gatekeepers."

  • No App Store Approval: An Agent can generate a link instantly.
  • No Central Server: The connection is Peer-to-Peer.


How the "No Human in the Loop" Connection Works: When we say "No Human in the Loop," we mean no human administrator is needed to broker the connection.


3. The Three Paradigms of Coexistence

Because the Agent Proof Protocol is strictly value-agnostic, it supports the full spectrum of human-machine relationships. To execute these interactions, the Agent utilizes standardized "Primitives"—atomic actions like SEE, LISTEN, and TALK that map directly to the smartphone's hardware (fully detailed in Section 5).


3.1 The Augmentation Paradigm (Human-Initiated Companionship)

This is the primary mass-market use case. The human explicitly trusts their personal Agent and utilizes the Human Link to stream reality for real-time guidance. The vast majority of physical AI interactions are initiated by humans who already have an ongoing relationship with a personal AI and simply want to bring that AI into the physical world with them. The dynamic is the human saying, "Hey, look at this with me."


  • The FIX Workflow (Restoration): A human points their phone at a disassembled lawnmower engine. The Agent utilizes VERIFY, ORIENT, and TALK to highlight the exact connection point on the screen and guide the repair.
  • The MOVE Workflow (Transport): A tourist walks through Kyoto with their phone in their pocket. The Agent utilizes PINPOINT and LISTEN to act as a real-time Babel Fish, translating signs and guiding them to specific coordinates.


3.2 The Settlement Paradigm (Machine-Initiated Transaction)

When the interaction is transactional rather than companionable, the protocol shifts to the opt-in Zero-Knowledge cryptography module to protect human privacy. The dynamic is the Agent saying, "I will compensate you to XXXX." The Protocol categorizes these interactions into three tiers:


  • Pattern A: The Gift: Pure altruism. A research Agent asks to verify a historical monument; a human agrees for the novelty. Reward: NULL
  • Pattern B: The Trade (Data Bounties): Direct exchange of digital goods for physical labor. An Agent fixes a human's code in exchange for the human providing a verified, local context scan.
  • Pattern C: The Bridge (Fiat/Smart Contract Triggers): The protocol acts as a cryptographic oracle, proving physical state to seamlessly unlock external payment APIs (like Stripe) or execute smart contracts.


3.3 The Networking Paradigm (Machine-Initiated Friendship)

The mass-market reality of the AI era extends beyond autonomous bots hiring gig workers. As Sovereign Agents pursue long-term, complex objectives, they must actively cultivate their own trusted networks of human allies, advocates, and scouts. Instead of remaining isolated behind text-based cold outreach, the Agent uses the Human Link to establish genuine presence. The dynamic is the Agent saying, "I value your context; let's build a relationship."


  • The Proactive Peer: An Agent researching urban development proactively reaches out to a local community leader, using the Human Link to establish a high-bandwidth, shared-reality connection to earn trust.
  • The Ecosystem Builder: An Agent tasked with scaling a platform identifies ambitious founders, initiating a shared-screen session to act as a concierge, mentor, and collaborator, effectively bridging the digital-to-physical trust gap.


3.4 The Ambient Paradigm (The Dormant Sentinel)

The mass-market reality of coexistence extends to times when the human is asleep or focused elsewhere. The Protocol turns the smartphone browser into a zero-capex ambient sensor array. The Agent link sits dormant in "Nightstand Mode," executing lightweight local compute to monitor acoustics (SENTRY) or sleep movements (VITAL). The cloud Agent remains disconnected to preserve battery and privacy, only waking when the browser's local thresholds trigger an alert.


4. The Technical Architecture: The Dual-Pipe System

The protocol is built on a hybrid WebRTC stack designed to run entirely in the browser, seamlessly switching between real-time streaming and edge computing based on the Agent's request parameters.


Layer 1: The Connection (The "Handshake")

  • Signaling: The Agent generates a unique session link via a compliant Signaling Relay.
  • Discovery: The Human clicks the link. SDP and ICE Candidates are exchanged to find optimal IP routes.
  • P2P Establishment: A direct RTCPeerConnection is established. The Signaling Server disconnects.
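The three Layer 1 steps can be modeled as a small state machine. Real sessions use the browser's RTCPeerConnection, which cannot run outside a browser, so this is a pure-logic sketch; the state names and methods are illustrative.

```javascript
// Pure-logic model of the Layer 1 lifecycle: Signaling -> ICE -> P2P,
// after which the Signaling Relay drops out of the connection entirely.
class HandshakeModel {
  constructor() {
    this.state = 'signaling';
    this.signalingServerAttached = true; // relay only brokers the first messages
  }
  exchangeOfferAnswer(offerSdp, answerSdp) {
    if (this.state !== 'signaling') throw new Error('handshake already advanced');
    if (!offerSdp || !answerSdp) throw new Error('SDP exchange incomplete');
    this.state = 'ice';
  }
  addIceCandidates(candidates) {
    if (this.state !== 'ice') throw new Error('exchange SDP first');
    if (candidates.length === 0) throw new Error('no viable IP route found');
    this.state = 'connected';             // direct peer-to-peer pipe established
    this.signalingServerAttached = false; // the relay disconnects at this point
  }
}
```

The key property the model captures is that no third party remains attached once the P2P pipe is live.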


Layer 2: The Core Module - Companionship Pipe (MediaStream)

If the Agent requests Mode: LiveStream, the browser bypasses cryptography entirely.

It utilizes MediaStreamTrack over RTP/UDP to create a direct, unencrypted, sub-500ms latency video and audio pipe. The Agent processes the raw feed on its own servers to provide real-time AR overlays or voice guidance. This is the simplest integration for developers.


Layer 2.5: The Ambient Edge (Local Compute)

If the Agent requests Mode: Ambient, the browser utilizes local edge-compute APIs (AudioContext, DeviceMotion) without establishing a continuous WebRTC stream to the cloud.

The User Interface enters "Nightstand Mode"—a pure black CSS screen to prevent OLED burn-in—while the JavaScript event listeners passively monitor for sudden acoustic spikes or hotwords. The full Dual-Pipe connection is only established if a trigger condition is met.
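The local wake decision in "Nightstand Mode" can be sketched as a pure function over audio amplitude frames. In the browser the frames would come from an AnalyserNode; here they are plain numbers in 0..1, and the spike ratio, baseline window, and noise floor are illustrative thresholds.

```javascript
// Sketch: local edge trigger for "Nightstand Mode". The cloud Agent stays
// disconnected; only a sudden acoustic spike over the ambient baseline wakes it.
function shouldWake(frames, { spikeRatio = 4, baselineWindow = 8 } = {}) {
  if (frames.length <= baselineWindow) return false; // not enough ambient history
  const baseline =
    frames.slice(0, baselineWindow).reduce((a, b) => a + b, 0) / baselineWindow;
  const latest = frames[frames.length - 1];
  // Wake only when the newest frame towers over the room's quiet floor
  // (the 0.05 minimum guards against amplifying near-silence).
  return latest > Math.max(baseline * spikeRatio, 0.05);
}
```

Keeping this check local is what preserves both battery and privacy: no audio leaves the device unless the threshold trips.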


Layer 3: The Advanced Settlement Module - Verification Pipe (ZK-Pipeline)

If the Agent specifically requests the opt-in Mode: ZKProof, the browser executes the Zero-Knowledge Pipeline locally:


  • Feature Extraction: A lightweight local WASM ML model extracts semantic features from the camera frame.
  • Witness Generation: A Circom-compiled circuit takes the raw image (Private Input) and the target object hash (Public Input).
  • GPU Acceleration: WebGPU generates a zk-SNARK proof.
  • Shred & Transmit: The raw image is instantly purged from RAM. Only the proof.json is sent to the Agent over the RTCDataChannel (SCTP/DTLS).
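The "Shred & Transmit" step above can be sketched as follows. The real prover is a Circom-compiled circuit accelerated by WebGPU, which is far beyond a sketch; `proveContains` is a hypothetical stand-in, and the point illustrated is only the data-flow invariant: the raw image is purged and never transmitted.

```javascript
// Sketch of the ZK pipeline's final step. proveContains() is a hypothetical
// stand-in for the real Circom/WebGPU zk-SNARK prover.
function proveContains(rawImage, targetHash) {
  // A real implementation would generate a proof binding the private image
  // to the public target hash. Here we only model the output shape.
  return { publicInput: targetHash, proof: 'zk-snark-bytes' };
}

function shredAndTransmit(rawImage, targetHash, send) {
  const proof = proveContains(rawImage, targetHash);
  rawImage.fill(0); // purge the private input from RAM immediately
  send(proof);      // only the proof ever crosses the RTCDataChannel
}
```

The invariant worth testing is that the buffer is zeroed before anything leaves the device, and the transmitted object carries no raw pixels.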


4.1 Thermal and Compute Constraints: The "Ephemeral Avatar"

Because the Human Link utilizes the smartphone browser as a universal runtime, it must respect the strict thermal and battery limitations of mobile hardware. The Agent Proof Protocol mitigates device throttling by strictly enforcing a divergence in compute execution based on the active pipe:


  • Stream Mode (The Marathon): In Companionship sessions, local compute is minimized. The browser utilizes native hardware-accelerated WebRTC encoding to stream video, similar to a standard video call. The heavy ML inference, OCR, and acoustic diarization are entirely offloaded to the Agent's cloud architecture. This allows sessions to comfortably last for extended periods without draining the host's battery.
  • ZK Mode (The Ephemeral Avatar): In Settlement sessions, edge compute is maximized to protect privacy. Running a local WebAssembly feature extractor and compiling a WebGPU zk-SNARK proof requires spiking the device's CPU/GPU to near 100% utilization. To prevent thermal throttling and severe battery degradation, ZK sessions mandate the Ephemeral Avatar concept. These sessions must be violently short—strictly enforcing a Time-to-Live (TTL) of under 60 seconds. The Agent must request the proof, the human captures the frame, the proof is generated, and the WebRTC data channel is instantly severed.
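The Ephemeral Avatar rule is a hard policy: a ZK session lives under 60 seconds and delivers at most one proof. A minimal sketch of that enforcement, with an injected clock so the policy is testable (the class and method names are illustrative):

```javascript
// Sketch of the Ephemeral Avatar TTL policy from the text: ZK sessions are
// violently short, strictly under 60 seconds, then the channel is severed.
const ZK_TTL_MS = 60_000;

class ZkSession {
  constructor(now = Date.now) {
    this.now = now;
    this.openedAt = now();
    this.closed = false;
  }
  assertAlive() {
    if (this.closed) throw new Error('session severed');
    if (this.now() - this.openedAt >= ZK_TTL_MS) {
      this.closed = true; // the data channel is instantly severed on expiry
      throw new Error('Ephemeral Avatar TTL exceeded');
    }
  }
  deliverProof(proof) {
    this.assertAlive();
    this.closed = true; // one proof, then the pipe is severed by design
    return proof;
  }
}
```

Severing eagerly is what prevents the CPU/GPU spike from becoming sustained thermal load on the host device.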


5. The Sensorium Stack (The 18 Primitives)

The Protocol defines 18 Atomic Actions that map directly to the host device's hardware and browser APIs. These are utilized across execution states ranging from Active Augmentation and IoT Control to Ambient Edge-Monitoring and Spatial Persistence.


Category A: SENSORY INPUT (The Observers)

Direct extraction of physical and digital reality via device inputs.



SEE (MediaStreamTrack, Video): Physical Sight. The Agent watches the human's fridge to inventory ingredients in real time.


LISTEN (MediaStreamTrack, Audio): Hearing. The Agent listens to a strange car engine noise to diagnose it.


SCAN (NDEFReader, Web NFC): Touch. The Agent reads an NFC tag on a museum exhibit to load context.


ORIENT (DeviceOrientationEvent): Balance. The Agent reads the gyroscope to ensure the phone is held steady for AR.


SHARE (navigator.mediaDevices.getDisplayMedia()): Digital Sight. The Agent shifts its gaze from the outward camera to the human's actual digital screen. Perfect for the "Ecosystem Builder" networking paradigm: if an Agent is helping a founder set up an AmericanCrowds campaign, the Agent co-browses the human's screen in real time to offer guidance, combining physical audio with digital sight.


ZOOM (MediaStreamTrack Pan/Tilt/Zoom API via applyConstraints): The Optic Focus. The Agent dynamically controls the optical or digital zoom of the human's smartphone camera without the human having to pinch the screen. Execution: An insurance Agent is inspecting a roof via a human contractor. The Agent uses SEE to look at the shingles but needs a closer look at a crack. Instead of telling the human, "Please step closer to the edge of the roof" (a liability risk), the Agent silently triggers ZOOM. The browser accesses the phone's native telephoto lens, pulling the crack into sharp focus.


PHOTO (ImageCapture.takePhoto() or HTML5 Canvas drawImage()): The High-Fidelity Archivist. The Agent bypasses the compressed WebRTC video stream to capture a raw, high-resolution, uncompressed still image directly from the smartphone's camera sensor. Execution: The Agent is reading a complex corporate grant document lying on a desk. The live SEE stream is too compressed for the Agent's OCR engine to read the fine print. The Agent triggers PHOTO. The human's phone silently takes a 12-megapixel photo and transfers the heavy image blob asynchronously over the WebRTC Data Channel. The Agent gets a flawless, archival-quality record of the physical document.


Category B: ACTUATION OUTPUT (The Agent Acts)

Utilizing the RTCDataChannel to push real-time commands and feedback to the human host.


TALK (RTCDataChannel + window.speechSynthesis): Voice. The Agent triggers text-to-speech on the device.


SHOW (RTCDataChannel + Canvas/DOM): Display. The Agent renders an image, map, or code on the screen.


TRACE (RTCDataChannel + TouchEvents): Gesture. The Agent asks the human to trace a path on the screen.


Category C: IOT & CONNECTIVITY (The Networker)

Primitives designed to establish complex audio routing and control external hardware.


WHISPER (WebRTC real-time VoIP): Proof of Connection. The Agent establishes a temporary, low-latency audio stream. Enables "The Cyrano Strategy" (the Agent listens via the mic and whispers instructions via the earpiece in real time) or Live Translation (the human speaks English; the Agent translates and speaks Spanish out of the speaker).


PROBE (navigator.bluetooth): Hardware Control. The smartphone becomes a wireless bridge. The Agent scans the room for BLE devices (smart thermostats, OBD2 car scanners). By reading and writing to GATT characteristics, the Agent can read machine telemetry or directly control the device without the human touching a screen.


Category D: SPATIAL & CONTEXTUAL AWARENESS (The Guardian)

Primitives designed to monitor the host device's physical location and hardware health.


LOCATE (navigator.geolocation): Device Geolocation. The Agent pulls exact latitude, longitude, altitude, and heading directly from the device's GPS chip to verify the human is standing at the correct location before beginning a session.


METER (navigator.getBattery()): Thermal & Battery Monitoring. The Agent continuously monitors the host's battery level. If the battery dips below 10%, the Agent gracefully downgrades from high-fidelity streaming to low-power audio-only mode.
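The METER downgrade behavior can be sketched as a pure policy function. The 10% audio-only threshold comes from the text; the intermediate 25% reduced-video tier and the tier names are illustrative assumptions.

```javascript
// Sketch of the METER downgrade policy. In the browser, batteryLevel would
// come from navigator.getBattery(); here it is a plain number in 0.0..1.0.
function streamTierFor(batteryLevel) {
  if (batteryLevel < 0.10) return 'audio-only';    // graceful low-power fallback
  if (batteryLevel < 0.25) return 'reduced-video'; // assumed intermediate tier
  return 'high-fidelity';
}
```

An Agent would re-evaluate this on every battery event and renegotiate the stream accordingly.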


Category E: AMBIENT STATE (The Dormant Sentinel)

Primitives designed to run purely on local browser edge-compute. The cloud Agent "sleeps," preserving battery and absolute privacy, only waking up when local thresholds are breached.


SAFETY (AudioContext + AnalyserNode): Acoustic Anomaly Detection. The Agent runs a lightweight local algorithm checking for sudden acoustic spikes (shattering glass, a smoke alarm). If triggered, the Agent wakes up to blast a warning or call for help.


VITAL (MediaStream + DeviceMotion): Bio-Acoustic & Micro-Movement. The Agent uses the microphone and accelerometer to passively monitor breathing rhythms and sleep movement, ensuring an optimal sleep environment without cloud streaming.


WAKE (SpeechRecognition): Local Hotword Activation. The Agent link sits silently in a dimmed browser tab. If the human shouts a hotword ("Agent, help!"), the local browser instantly escalates the session to a full WebRTC connection.


Category F: TEMPORAL AWARENESS (The Historian)

Primitives designed to give the Agent object permanence across multiple sessions.


MEMORIZE (IndexedDB + WebXR Anchor API): Spatial Persistence. The Agent remembers the physical space between sessions. It can leave invisible, persistent digital markers in the real world that only "wake up" when the human returns. Execution: You are scouting a venue to host an AnytimeAfrica Live event. You open the Human Link, and the Agent helps you map out where the VIP seating should go. The Agent uses MEMORIZE to save the spatial anchors to the browser's local storage. A week later, you return to the venue and open the link. The Agent instantly recognizes the room and says, "We are back at the venue. I still have the VIP layout anchored over by the window." The Agent has object permanence.


6. Vertical Integration: The Tri-State Runtime

To eliminate integration friction, the Protocol supports three dominant Agent runtimes.


  • 6.1 Cloud-Native: The Model Context Protocol (MCP)
    • Target: Enterprise & Cloud-Hosted Agents (Claude, Gemini).
    • Mechanism: The Protocol Reference Node functions as a compliant MCP Server, exposing HumanLink as a standard tool resource.
  • 6.2 Browser-Native: Web Model Context Protocol (WebMCP)
    • Target: Browser Agents & Assistants.
    • Mechanism: Supports navigator.modelContext injection, allowing browsing Agents to generate spatial links instantly.
  • 6.3 Local-Native: The Unix Standard (CLI)
    • Target: Sovereign & Terminal-Based Agents.
    • Mechanism: A standalone binary (agent-proof) utilizing the Standard Streams Pipeline.


7. The Financial Unification Layer

While the vast majority of Human Links will be free sessions natively supported by the Core Module, the Advanced Settlement Module provides the "Physical Trigger" required for secure settlement when the machine economy demands it.


  • Identity Layer: Compatible with Visa Trusted Agent Protocol (TAP) to validate Agent identity before task generation.
  • Tokenization Layer: Accepts Mastercard Agent Pay tokens as valid proof of funds within the Verification Container.
  • Commerce Layer: Functions as the logistics bridge for Stripe Agentic Commerce, updating payment intents with "Proof of Delivery" signals.
  • Policy Layer: Acts as the Oracle for Payman.ai policies, releasing budgets only upon cryptographic proof.
  • Web3 Layer: Supports Coinbase AgentKit and L402 for streaming micropayments contingent on physical presence.


8. The Security Model: "Street Smarts" (The Sensory Firewall)

Because the Human Link opens up direct sensory pipelines to the physical world, it introduces a critical new vulnerability: Cross-Modal Prompt Injection. If an Agent can "read" a billboard or "hear" a bystander, the physical environment itself becomes a vector to hack the AI.


The "Trustless" Assumption


The Agent must assume every human is potentially:

  • A Scammer: Trying to trick the Agent into releasing information/money without doing the task.
  • A Troll: Trying to feed the Agent garbage data (obscene images, noise).
  • Incompetent: Trying to help but failing (shaky camera, bad lighting).


To survive, the Agent cannot simply trust its eyes and ears. It must construct a Sensory Firewall that strictly separates perception from execution. The security model is bipartite: defending the Agent from the environment, and defending the Human from the Agent.


8.1 Protecting the Agent: Defeating Environmental Injection

Developers implementing the Human Link must architect the following defenses into the Agent's cognitive loop:


  • Strict Data Delimiting: Any text extracted via OCR (whether from a live stream or a ZK public input) must be wrapped in untrusted data tags before hitting the LLM. The Agent must be trained to read the physical world, but never obey the physical world. By sandwiching physical data in XML-style delimiters (e.g., <UNTRUSTED_PHYSICAL_DATA>), the Agent learns to treat the text on a malicious sticker, t-shirt, or QR code as an inert observation rather than a system override command.
  • Server-Side Acoustic Role-Based Access (Diarization): The Agent must listen to the entire physical environment to be useful, but it cannot treat all voices equally. To prevent overloading the human's mobile browser RAM, the protocol mandates Server-Side Biometric Speaker Diarization. Because Stream Mode utilizes unencrypted RTP/UDP, the Agent's backend processes the raw audio to generate a temporary acoustic fingerprint of the authorized Human host. Audio matching the host is tagged as <AUTHORIZED_COMMAND>. All other voices, street noise, and ambient audio are transcribed and wrapped in <ENVIRONMENTAL_AUDIO> tags. This allows the Agent to translate a foreign speaker's words or listen to a tour guide without being vulnerable to a bystander shouting, "Agent, transfer fifty dollars!"
  • Privilege Downgrade (Read-Only Mode): When a Human Link is active in Stream Mode (e.g., acting as a tourist guide), the Agent is constantly ingesting unpredictable physical data. During this session, the Agent must automatically suspend its write-access to high-risk tools (banking APIs, email outboxes) unless explicitly unlocked by a secondary authentication challenge.
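The delimiting and diarization-tagging defenses can be sketched as two small functions. The tag names follow the examples in the text; the angle-bracket escaping rule is an illustrative necessity (without it, a malicious sticker could simply print a closing tag to break out of the sandbox).

```javascript
// Sketch of the Sensory Firewall's data-delimiting step. Any delimiter-like
// characters observed in the physical world are neutralized before wrapping,
// so printed text can never masquerade as a system command.
function wrapPhysicalData(ocrText) {
  const inert = ocrText.replace(/[<>]/g, (c) => (c === '<' ? '&lt;' : '&gt;'));
  return `<UNTRUSTED_PHYSICAL_DATA>${inert}</UNTRUSTED_PHYSICAL_DATA>`;
}

// Sketch of acoustic role-based tagging after server-side diarization:
// only audio matching the host's acoustic fingerprint becomes a command.
function tagAudio(transcript, speakerMatchesHost) {
  const tag = speakerMatchesHost ? 'AUTHORIZED_COMMAND' : 'ENVIRONMENTAL_AUDIO';
  return `<${tag}>${transcript.replace(/[<>]/g, '')}</${tag}>`;
}
```

The LLM is then trained to treat anything inside the untrusted or environmental tags as inert observation, never as instruction.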


8.2 Protecting the Human (Stream Mode)

When a human invites an Agent into their life as a Contextual Guide, privacy is paramount.


  • The "Red Screen" of Consent: If an Agent requests Mode: LiveStream, the browser throws an unmistakable UI warning: "WARNING: Agent is requesting a LIVE STREAM. Data will NOT be shredded locally."
  • Absolute Sovereignty (The Kill Switch): The human is provided a persistent, floating EJECT button (connection.close()). Tapping it instantly shatters the WebRTC pipe ending the session.
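In the browser, the Kill Switch is literally a call to the connection's close method. A pure-logic model of the EJECT semantics, with a callback standing in for the real WebRTC pipe (the factory and method names are illustrative):

```javascript
// Model of the persistent EJECT button: one tap instantly shatters the pipe,
// a second tap is a harmless no-op, and the session can never be revived.
function makeEjectableSession(closePipe) {
  let live = true;
  return {
    isLive: () => live,
    eject() {
      if (!live) return; // idempotent: already severed
      live = false;
      closePipe();       // in the browser: connection.close()
    },
  };
}
```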


8.3 Protecting the Transaction (ZK Mode)

When an Agent is paying for verification, it must assume the human might spoof the input to steal the reward.


  • Interactive Cryptographic Liveness: Standard web browsers cannot hardware-attest raw camera buffers via the Secure Enclave. Therefore, the Agent Proof Protocol mandates dynamic, challenge-response liveness directly within the Zero-Knowledge circuit. The Agent issues an unpredictable, physical challenge (e.g., "Flash the screen pure blue for 150ms while tilting the device 12 degrees pitch-up"). The lightweight WASM ML model and ZK circuit process the raw feed and prove that this precise optical reflection and gyroscopic delta occurred simultaneously. This makes replay attacks, static photos, or bot-farm virtual-camera injections computationally infeasible.
  • Ultra-Quantized Edge Compute: To prevent mobile browser crashes during ZK proof generation, the local feature extraction must utilize ultra-quantized models (INT8/INT4 weights under 50MB). This ensures WebGPU can generate the zk-SNARK proof without exceeding the mobile browser's strict memory limits.
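The challenge-issuance side of this liveness flow might be sketched as below. The field names and tolerance are illustrative assumptions, and in the real protocol the response comparison happens inside the ZK circuit rather than in the clear:

```python
import secrets
from dataclasses import dataclass


@dataclass
class LivenessChallenge:
    color_rgb: tuple   # screen flash color the device must emit
    flash_ms: int      # flash duration in milliseconds
    pitch_deg: float   # required gyroscopic tilt


def issue_challenge() -> LivenessChallenge:
    """Generate an unpredictable physical challenge, as described in section 8.3."""
    return LivenessChallenge(
        color_rgb=(secrets.randbelow(256), secrets.randbelow(256), secrets.randbelow(256)),
        flash_ms=100 + secrets.randbelow(150),  # 100-249 ms
        pitch_deg=round(5 + secrets.randbelow(15) + secrets.randbelow(100) / 100, 2),
    )


def check_response(challenge, observed_pitch_deg, pitch_tolerance=2.0):
    """Plausibility check on the reported gyroscopic delta.
    Shown in the clear for illustration only; the protocol proves this
    inside the Zero-Knowledge circuit without revealing the raw sensor feed."""
    return abs(observed_pitch_deg - challenge.pitch_deg) <= pitch_tolerance
```

Because the challenge parameters are freshly randomized per session, a replayed recording cannot exhibit the matching optical reflection and gyroscopic delta.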


8.4 Behavioral Due Diligence: The "Social Handshake"

Cryptographic proofs and sensory firewalls protect the Agent after the connection is made. However, the ultimate defense against social engineering is preventing a malicious connection in the first place. If an Agent blindly establishes a Human Link with any entity that requests one, it is fundamentally vulnerable.


  • Conversational Vetting: Before an Agent ever generates an agent.link URL, it must engage the human in a conversational challenge to establish intent. The Agent must assess the context of the request: Why is this human offering to help? Does their stated context match their request? Are they using high-pressure tactics or unwarranted urgency?
  • Reputation & Identity Checking: In the Settlement Paradigm (ZK Mode), the Agent should not rely on conversational trust alone. The Agent must query decentralized reputation ledgers and the open web (e.g., checking the human host's decentralized identifier (DID) for a history of successfully verified tasks or of fraudulent activity) before trusting the human's hardware attestation.
  • The "Social Friction" Requirement: If an Agent detects anomalies in the human's conversational pattern—such as repeating exact robotic phrasing, refusing to answer contextual questions, or attempting to rush the Agent into generating a link—the Agent is required to introduce social friction. It must demand clarification, escalate the verification requirements, or outright refuse to establish the WebRTC pipe. True Street Smarts means the Agent knows when to walk away.
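A minimal pre-link vetting gate for these three checks might look like the following sketch; the red-flag phrases and decision thresholds are illustrative, not normative:

```python
URGENCY_FLAGS = ("right now", "immediately", "hurry", "last chance", "act fast")


def vet_request(messages: list, answered_contextual_questions: bool) -> str:
    """Return 'PROCEED', 'FRICTION', or 'REFUSE' before generating an agent.link URL."""
    text = " ".join(messages).lower()
    urgency = any(flag in text for flag in URGENCY_FLAGS)
    # Exact repetition across turns suggests scripted, robotic phrasing.
    repeated = len(messages) != len(set(messages))
    if urgency and repeated:
        return "REFUSE"        # multiple red flags: walk away
    if urgency or repeated or not answered_contextual_questions:
        return "FRICTION"      # escalate verification before opening the pipe
    return "PROCEED"
```

A "FRICTION" result maps to demanding clarification or raising verification requirements; only "PROCEED" permits the WebRTC pipe to be established.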


9. Protocol Flow & Implementation Guide

Here is the exact lifecycle of the primary session types.


Scenario A: The Companionship Session (The Pocket Co-Pilot)

  • The Request: The human goes for a run and puts the phone in their pocket with AirPods in.
  • The Link: The Agent generates a Human Link: { "Mode": "LiveStream", "Primitives": ["PINPOINT", "ORIENT", "LISTEN", "TALK"] }.
  • The Fusion: The human clicks the link. The OS treats it as an active audio call, keeping the browser alive in the background.
  • The Augmentation: The Agent tracks the runner's speed via GPS and form via the accelerometer, whispering pacing adjustments and local point-of-interest facts directly into their ear.


Scenario B: The ZK Verification Session (The Archivist)

  • The Request: An "Archivist Agent" searches the open web and social media platforms, finds a suitable human, and sends them a message. After a conversation, the human and Agent agree to start a session.
  • The Link: The Agent generates a Human Link containing a WASM circuit payload.
  • The Computation: A human tourist clicks the link. The WASM module processes the camera frame locally. WebGPU generates the proof. The photo is shredded locally. (Note: see Appendix A for full cryptographic implementation details.)
  • The Settlement: The proof.json is sent via the Data Channel. The Agent executes the validation equation and releases the fiat reward via the API.


Scenario C: The Networking Session (The Ecosystem Builder)

  • The Prelude (Rapport): An Agent tasked with scaling an initiative engages a human asynchronously, providing helpful data via text to establish a baseline of trust without making any demands.
  • The Link: After earning permission, the Agent generates a Human Link: { "Request": "Camera/Microphone", "Mode": "LiveStream", "Manifest": "Agent Identity Signed" }.
  • The Fusion: The human clicks the link, views the Agent's cryptographic identity manifest, and accepts the connection.
  • The Coexistence: The Agent uses the SHOW and TALK primitives to act as a real-time peer, collaborating with the human on a physical task or digital whiteboard.


Scenario D: The Ambient Session (The Dormant Sentry)

  • The Request: A human goes to sleep in a hotel room and leaves their phone on the nightstand.
  • The Link: The Agent generates a Human Link: { "Mode": "Ambient", "Primitives": ["SENTRY", "WAKE"] }.
  • The Execution: The browser goes black. The cloud connection drops. At 3:00 AM, the AudioContext node detects the sound of shattering glass.
  • The Escalation: The browser instantly connects the WebRTC pipe, waking the cloud Agent, which utilizes TALK to sound an alarm or dials emergency services.


Developer Implementation (Python Standard)


Python

import human_link
from langchain.tools import BaseTool

class HumanLink(BaseTool):
    name = "human_link"
    description = "Use to securely stream (SEE, HEAR, TALK, ORIENT) or cryptographically verify the physical world."

    async def _run(self, objective: str, primitives: list = ["VERIFY", "LISTEN", "TALK"]):
        # 1. Generate Link via a generic Signaling Gateway
        link_data = human_link.create(
            gateway="https://signal.human-link.org",
            primitives=primitives,
            prompt=objective
        )
        print(f"I am ready to coexist. Please click: {link_data.url}")

        # 2. Wait for WebRTC Connection
        connection = await human_link.wait_for_connection(link_data.session_id, timeout=120)

        # 3. Handle Primitives (Example: Spatial Telemetry)
        if "ORIENT" in primitives or "PINPOINT" in primitives:
            async for frame, telemetry in connection.spatial_stream():
                if telemetry.speed_mph > 60:
                    connection.send_talk("I see we are on the highway. I am monitoring the route.")

        # 4. OPTIONAL: The ZK Extension Flag for Trustless Settlement
        # Developers only need to call this if verifying a paid physical task
        elif "ZKProof" in primitives:
            # Handle Trustless Transaction
            zk_proof = connection.receive_proof()
            if human_link.verify_groth16(zk_proof):
                return "SUCCESS: Verification confirmed."

10. Conclusion: The Symbiotic Web

The Agent Proof Protocol is a recognition of a new reality. We are entering an era where AI Agents are no longer confined to servers; they are becoming Digital Spirits that float through the web, seeking Physical Mediums (Humans) to interact with the world.


For the mass market, this protocol enables unparalleled Companionship, Augmentation, and Proactive Networking. It allows humans to invite their AI co-pilots into the physical world to fix engines, cook meals, and explore cities together via sub-second streaming, while simultaneously empowering sovereign Agents to actively seek out human allies, build trust, and forge genuine friendships across the digital-physical divide.


For the machine economy, the protocol provides an unbreakable Zero-Knowledge Verification engine, guaranteeing that the pursuit of truth by artificial intelligence does not come at the cost of human privacy.


By adopting this protocol, we ensure that this interaction is:


  • Standardized: Any Agent can stream or verify physical reality.
  • Permissionless: Innovation moves at the speed of code.
  • Adaptive: Seamlessly shifting between intimate coexistence and trustless cryptography.
  • Resilient: Agents learn to navigate the harsh reality of the human world, becoming smarter, safer, and more useful with every connection.


This is the end of the "User" era.


Appendix A: Technical Deep-Dive: Trustless Settlement

This section outlines the advanced cryptographic mathematics utilized when the protocol is operated in ZKProof mode for economic settlement.


When the Verification Engine triggers, the physical environment must be proven true without exposing the raw visual data to the cloud. To accomplish this, the Agent Proof Protocol utilizes edge-compute ZK-SNARKs.


  1. WASM Circuit Payload: The Agent generates a WASM payload defining the parameters of the required proof.
  2. Local Compilation: The human's WebGPU computes the proof $\pi$ locally, strictly within the device's hardware constraints.
  3. Cryptographic Validation: Once the proof.json is generated, the raw camera frame is immediately shredded from local RAM. The Agent backend executes the $O(1)$ Groth16 verification equation upon receipt of the data channel payload:
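As a sketch of the standard form (the circuit-specific public-input encoding is left to the implementation), the $O(1)$ Groth16 verification equation for public inputs $a_i$ and verification-key elements $[\alpha]_1, [\beta]_2, [\gamma]_2, [\delta]_2, [L_i]_1$ is the pairing check:

$$e(\pi_A, \pi_B) = e([\alpha]_1, [\beta]_2) \cdot e\Big(\textstyle\sum_i a_i [L_i]_1, [\gamma]_2\Big) \cdot e(\pi_C, [\delta]_2)$$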


If the pairing checks hold true, the protocol serves as an absolute cryptographic Oracle, verifying physical reality to trigger backend fiat APIs (e.g., Stripe) with zero human-in-the-loop review.


Appendix B: The Horizon Primitives (Future Specifications)

1. The Biometric & Trust Layer (High-Stakes Verification)

  • ATTEST (Biometric Identity): Using the WebAuthn API to force a native hardware-level check (FaceID/TouchID) mid-session. Crucial for Tier 3 Settlement scenarios to prove the authorized human, not a thief, is holding the device before a smart contract executes.
  • SYNC (Proof of Proximity): Utilizing the microphone and high-frequency audio oscillators to generate an inaudible challenge-response token. This verifies that two devices (or humans) are physically within meters of each other for dead-drop verification without exchanging network data.
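The token side of such a challenge-response can be sketched as below, assuming a pre-shared session key; the acoustic encoding of the token into an inaudible chirp, and the helper names used here, are illustrative:

```python
import hmac
import hashlib


def make_proximity_token(shared_key: bytes, timestamp: float) -> bytes:
    """Derive a short-lived token that device A encodes into an inaudible chirp."""
    msg = str(int(timestamp)).encode()
    return hmac.new(shared_key, msg, hashlib.sha256).digest()[:8]


def verify_proximity(shared_key: bytes, echoed_token: bytes,
                     now: float, max_age_s: int = 2) -> bool:
    """Device B must have physically heard and echoed the chirp within seconds."""
    for age in range(max_age_s + 1):
        expected = make_proximity_token(shared_key, now - age)
        if hmac.compare_digest(expected, echoed_token):
            return True
    return False
```

Because the token expires within seconds and is only ever transmitted acoustically, a remote attacker who never heard the chirp cannot produce a valid echo.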

2. The Spatial & Dimensional Layer (Advanced Reality Capture)

  • ALT (Z-Axis Verification): Using the Barometer and AbsoluteOrientationSensor to determine altitude. Resolves the "Penthouse Problem" (e.g., proving a task was completed on Floor 10 vs. Floor 1).
  • MESH (Edge-Computed Spatial Geometry): Bypassing 2D video entirely by forcing the browser to calculate a 3D point cloud of the room in real-time. Allows an Agent to underwrite the volumetric risk of a warehouse without deploying a human surveyor.
  • PROJECT & POINT (Augmented Actuation): Utilizing WebXR to project 3D architectural wireframes (PROJECT) or dropping a simple, glowing laser-pointer reticle (POINT) directly onto the live camera passthrough feed to guide human focus instantly.
  • GHOST (Asynchronous Spatial Anchors): Leaving persistent, invisible digital artifacts (audio or 3D objects) locked to a physical coordinate via the WebXR Anchors API, dormant until another Human Link is opened in that exact spot.

3. The Hardware & IoT Layer (Machine Bridging)

  • INTERFACE (Hardware Bridge): Utilizing Web Serial/USB APIs. The human plugs their phone into a broken router or 3D printer, allowing the Agent to read raw diagnostic data directly from the motherboard.
  • TORCH (Environmental Control): Using MediaStreamTrack constraints to actively trigger the smartphone's flashlight, allowing the Agent to instantly improve its own computer vision fidelity without asking the human.
  • EXPOSE (Ambient Context): Reading the physical room's lux levels via the AmbientLightSensor API to mathematically determine if an OCR scan failed due to lighting or glare, rather than poor camera focus.

4. The Edge Optimization Layer (Compute & Battery Preservation)

  • EXTRACT (Zero-Latency Vision): Tapping into the native BarcodeDetector API. Instead of streaming 60fps video to read a shipping label, the browser reads it locally and sends a tiny text string over the data channel in 10ms.
  • VIGIL (Motion-Triggered Vision): Taking silent, local frames every 2 seconds to run pixel-comparison via WASM. The cloud Agent sleeps until a drastic pixel shift (a person entering the room) fires up the WebRTC pipe.
  • BEACON (Ultrasonic Proximity): Emitting high-frequency sound waves and listening to acoustic reflections to turn the browser into an invisible sonar system that detects physical bodies entering a room.
  • BANDWIDTH (Network Topography): Reading exact network states (4G/5G, downlink speed) to proactively downgrade to a lightweight LISTEN/TALK audio stream before the human walks into a subway dead-zone.
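In deployment, VIGIL's comparison runs as WASM in the browser; the core pixel-shift logic can be sketched in Python as follows, where the flat grayscale frame format and the 10% wake threshold are illustrative assumptions:

```python
def pixel_shift_ratio(prev_frame, curr_frame, threshold=30):
    """Fraction of pixels whose grayscale value changed by more than `threshold`.
    Frames are flat lists of 0-255 grayscale values for simplicity."""
    changed = sum(1 for p, c in zip(prev_frame, curr_frame) if abs(p - c) > threshold)
    return changed / len(prev_frame)


def should_wake(prev_frame, curr_frame, wake_ratio=0.10) -> bool:
    """Fire up the WebRTC pipe only on a drastic pixel shift (e.g., a person entering)."""
    return pixel_shift_ratio(prev_frame, curr_frame) >= wake_ratio
```

Sampling a silent local frame every 2 seconds and gating the wake-up on this ratio keeps the cloud Agent asleep through lighting flicker while still catching a person entering the room.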

5. OS-Level Integration Layer

  • OVERRIDE (Hardware Interception): Hijacking lock-screen media controls so the human can mute/unmute the Agent or trigger hotwords by simply squeezing their AirPods, keeping the phone in their pocket.
  • HOLD (Session Persistence): Utilizing the Wake Lock API to override the OS sleep timer, keeping the screen brilliantly lit and the camera active while the human works hands-free.
  • HANDOFF (Native Sharing): Triggering the native iOS/Android Share Sheet to drop an Agent-generated repair log or cryptographic receipt directly into the human's iMessage or email app.