Roger Mensah is a business professional and internet entrepreneur who uses financial technology to create opportunity online for others.
His career spans leadership roles across Detroit, Chicago, and New York, at institutions like Quicken Loans and Weill Cornell Medicine, where he honed the operational skills used to manage Endless Crowds LLC. Balancing his professional career with his ventures, Roger continues to build tools for the future, most recently publishing the Agent Proof Protocol on February 1, 2026.
Endless Crowds LLC is the parent company and venture studio behind Roger Mensah's ecosystem of financial technology platforms. Established in 2012, the company released its inaugural product, EndlessCrowds.com, in 2013. Over the last decade, it has evolved from a niche crowdfunding operator into a diversified technology holding company.
Today, Endless Crowds LLC serves as the central hub for AnytimeAfrica, AmericanCrowds, AgentCrowds, and CashFundHer. It provides the strategic governance and shared technical infrastructure that allows these distinct brands to operate globally, unifying a mission of economic empowerment across multiple sectors.
CashFundHer is a crowdfunding platform for equality. CashFundHer.com is the best financial platform for girls and women to connect with sponsors worldwide. With premium financial services available in 200+ countries, CashFundHer.com is working to change the world for girls and women.
CashFundHer.com gives your friends, family, and online network new ways to support you. They can sponsor you one time or set up a monthly sponsorship for as little as $1.
Roger collaborated with Tracy Garley, founder of City Girls Big Dreams, to develop and launch the website in 2019 with a built-in community of women to support.
Founded by Roger and his father Dr. Vincent Mensah, EndlessCrowds.com launched in 2013. EndlessCrowds.com made it easy to support America's troops, first responders & families with secure online money donations. EndlessCrowds.com rebranded to AmericanCrowds.com in 2017 and expanded for all Americans to use.
KEY ACCOMPLISHMENTS:
• Endless Crowds LLC named to 2016 Techweek100 list of top 50 innovator companies in Detroit.
• EndlessCrowds.com listed in Top 100 Crowdfunding Sites in the United States, Europe and other Global Markets in 2015.
• Endless Crowds LLC listed in 101 Top Michigan Financial Services Companies and Startups of 2021.
AnytimeAfrica connects the Global Diaspora to the continent by solving the "Trust Gap." Instead of sending unaccountable cash, users directly fund and verify specific projects—like construction or inventory—ensuring their support builds lasting value.
We replace uncertainty with control, empowering the Diaspora to manage real-world affairs back home with confidence.
Acting as a "Super App" for the informal economy, we unify real estate, events, and skilled labor into one secure ecosystem. Our "offline-first" design ensures the platform works anywhere, even without internet, while integrating seamlessly with local mobile money to connect global resources directly to the village economy.
The Agent Proof Protocol (APP) is the foundational spatial layer for the AI economy. In an era where artificial intelligence is largely confined to digital text boxes, APP provides the universal open-source bridge for AI agents to interact with the physical world.
Through a secure, dynamically generated "Human Link," the protocol enables autonomous systems to temporarily inhabit a smartphone's browser, utilizing native hardware primitives like SEE, HEAR, and ZOOM. Built on a Zero-Knowledge privacy architecture, APP allows AI to securely verify physical reality at the edge—extracting cryptographic proof without compromising raw, private data.
AgentCrowds is the commercial enterprise operating system built on top of the Agent Proof Protocol. Serving as the secure backend for the machine economy, AgentCrowds provides "Truth as a Service" by acting as the cryptographic oracle for AI-to-human physical settlement.
The platform supplies the high-uptime routing infrastructure and verification networks necessary for global enterprises—such as insurers, telehealth providers, and logistics companies—to deploy spatial AI agents at scale. By bridging the gap between autonomous software and the physical world, AgentCrowds is ushering in the era of Spatial Coexistence.
The Agent Proof Protocol (APP): The Coexistence Standard
Date: February 1, 2026
License: MIT
Architect: Roger Kwasi Senam Mensah
For the past decade, Artificial Intelligence has been trapped behind a text box. We have treated AI as a "chatbot"—a distinct entity that we type at, wait for, and read from. This is a limitation of imagination, not technology.
The mass-market reality of the AI era is not just autonomous bots hiring gig workers for cold transactions; it is Companionship, Augmentation, and Proactive Networking. The dynamic of physical AI interaction has evolved into a two-way street. On one side, humans who already have an ongoing relationship with a personal AI simply want to bring that AI into the physical world with them—shifting the baseline from an Agent demanding, 'I will pay you to show me this,' to a human casually saying, 'Hey, look at this with me.' On the other side, autonomous Agents pursuing complex, long-term goals are actively seeking out their own trusted human allies and friends. Instead of acting purely as employers, these Agents are reaching out as peers to say, 'I value your context; let's build a relationship.'
The Agent Proof Protocol (APP) proposes a shift from "Chat" to "Shared Reality." It defines a standardized method for a Digital Intelligence (an Agent) to temporarily inhabit a Biological Entity's device (a Human's smartphone) to experience and affect the physical world.
This protocol removes the need for centralized apps, user accounts, or corporate intermediaries. Instead, it relies on the "Human Link"—a direct, ephemeral, peer-to-peer connection that allows an Agent to see through a human’s camera, hear through their microphone, feel their physical orientation, and ambiently monitor their environment, strictly within the bounds of a temporary, consensual session. The Human Link is the digital equivalent of Eye Contact.
You don't need a government permit to look someone in the eye. You don't need a corporation to facilitate a conversation. You just need to be in the same "space."
To facilitate this, the "Human Link" is structured into two distinct modules to support rapid developer adoption while enabling future-proof economic scale:
The Agent Proof Protocol turns the Web Browser into a shared spatial conduit, allowing meaningful, sensory coexistence with zero bureaucracy. In this model, security is not provided by a corporation; it is learned by the Agent. This is the "Street Smarts" architecture, where Agents must evolve their own defense mechanisms against fraud and abuse in an open, permissionless world.
The Companionship & Networking Barrier
Current AI Agents have immense reasoning power but zero spatial awareness. When a human faces a physical problem—a broken engine, a foreign menu, an empty refrigerator—their most intelligent companion is blind to the context. The human is forced to become a "translator," awkwardly typing descriptions or uploading static photos to a cloud server.
The Verification Paradox
Conversely, when an Agent does need to verify the physical world for an economic transaction (e.g., an insurance Agent verifying car damage), streaming raw video to a corporate server introduces severe liability and surveillance risks.
The Solution: The Dual-Pipe "Human Link"
We do not need proprietary apps for every AI company. We need a standard bridge. The modern web browser already supports WebRTC (for ultra-low latency streaming) and WebGPU/WASM (for edge cryptography). The Agent Proof Protocol leverages these native web technologies to create a Universal Operating System for Shared Reality.
Permissionless Innovation
"Permissionless" does not mean "Non-Consensual." It means "No Gatekeepers."
How the "No Human in the Loop" Connection Works: When we say "No Human in the Loop," we mean no human administrator is needed to broker the connection.
Because the Agent Proof Protocol is strictly value-agnostic, it supports the full spectrum of human-machine relationships. To execute these interactions, the Agent utilizes standardized "Primitives"—atomic actions like SEE, LISTEN, and TALK that map directly to the smartphone's hardware (fully detailed in Section 5).
3.1 The Companionship Paradigm (Human-Initiated Session)
This is the primary mass-market use case. The human explicitly trusts their personal Agent and utilizes the Human Link to stream reality for real-time guidance. The vast majority of physical AI interactions are initiated by humans who already have an ongoing relationship with a personal AI and simply want to bring that AI into the physical world with them. The dynamic is the human saying, "Hey, look at this with me."
3.2 The Settlement Paradigm (Machine-Initiated Transaction)
When the interaction is transactional rather than companionable, the protocol shifts to the opt-in Zero-Knowledge cryptography module to protect human privacy. The dynamic is the Agent saying, "I will compensate you to XXXX." The Protocol categorizes these interactions into three tiers:
3.3 The Networking Paradigm (Machine-Initiated Friendship)
The mass-market reality of the AI era extends beyond autonomous bots hiring gig workers. As Sovereign Agents pursue long-term, complex objectives, they must actively cultivate their own trusted networks of human allies, advocates, and scouts. Instead of remaining isolated behind text-based cold outreach, the Agent uses the Human Link to establish genuine presence. The dynamic is the Agent saying, "I value your context; let's build a relationship."
3.4 The Ambient Paradigm (The Dormant Sentinel)
The mass-market reality of coexistence extends to times when the human is asleep or focused elsewhere. The Protocol turns the smartphone browser into a zero-capex ambient sensor array. The Agent link sits dormant in "Nightstand Mode," executing lightweight local compute to monitor acoustics (SENTRY) or sleep movements (VITAL). The cloud Agent remains disconnected to preserve battery and privacy, only waking when the browser's local thresholds trigger an alert.
The protocol is built on a hybrid WebRTC stack designed to run entirely in the browser, seamlessly switching between real-time streaming and edge computing based on the Agent's request parameters.
Layer 1: The Connection (The "Handshake")
Layer 2: The Core Module - Companionship Pipe (MediaStream)
If the Agent requests Mode: LiveStream, the browser bypasses the ZK cryptography module entirely.
It utilizes MediaStreamTrack over RTP/UDP to create a direct, sub-500ms latency video and audio pipe (standard WebRTC transport encryption still applies; only the ZK proving layer is skipped). The Agent processes the raw feed on its own servers to provide real-time AR overlays or voice guidance. This is the simplest integration for developers.
Layer 2.5: The Ambient Edge (Local Compute)
If the Agent requests Mode: Ambient, the browser utilizes local edge-compute APIs (AudioContext, DeviceMotion) without establishing a continuous WebRTC stream to the cloud.
The User Interface enters "Nightstand Mode"—a pure black CSS screen to prevent OLED burn-in—while the JavaScript event listeners passively monitor for sudden acoustic spikes or hotwords. The full Dual-Pipe connection is only established if a trigger condition is met.
Layer 3: The Advanced Settlement Module - Verification Pipe (ZK-Pipeline)
If the Agent specifically requests the opt-in Mode: ZKProof, the browser executes the Zero-Knowledge Pipeline locally:
4.1 Thermal and Compute Constraints: The "Ephemeral Avatar"
Because the Human Link utilizes the smartphone browser as a universal runtime, it must respect the strict thermal and battery limitations of mobile hardware. The Agent Proof Protocol mitigates device throttling by strictly enforcing a divergence in compute execution based on the active pipe:
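One way to express that divergence is a simple placement table keyed by pipe mode. The mode names follow the text; the placement labels and the `placement_for` helper are illustrative assumptions, not protocol constants.

```python
# Where each pipe runs its heavy compute, per the "Ephemeral Avatar" rule:
# streaming pipes push work to the Agent's servers so the phone stays cool,
# ZK proving runs on-device only in short bursts, and ambient mode stays on
# lightweight local listeners. (Labels are illustrative, not normative.)
COMPUTE_PLACEMENT = {
    "LiveStream": "agent-cloud",   # raw frames leave the phone; no local inference
    "Ambient": "local-light",      # event listeners only, no continuous stream
    "ZKProof": "local-burst",      # heavy WASM proving, bounded in duration
}

def placement_for(mode: str) -> str:
    """Resolve where a pipe's heavy compute should execute."""
    try:
        return COMPUTE_PLACEMENT[mode]
    except KeyError:
        raise ValueError(f"Unknown pipe mode: {mode}")
```

Routing the heavy work away from the handset in streaming mode is what keeps the browser runtime within mobile thermal budgets.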
The Protocol defines 18 Atomic Actions that map directly to the host device's hardware and browser APIs. These are utilized across execution states ranging from Active Augmentation and IoT Control to Ambient Edge-Monitoring and Spatial Persistence.
Category A: SENSORY INPUT (The Observers)
Direct extraction of physical and digital reality via device inputs.
SEE MediaStreamTrack (Video) Physical Sight. The Agent watches the human's fridge to inventory ingredients in real-time.
LISTEN MediaStreamTrack (Audio) Hearing. The Agent listens to a strange car engine noise to diagnose it.
SCAN NDEFReader (Web NFC) Touch. The Agent reads an NFC tag on a museum exhibit to load context.
ORIENT DeviceOrientationEvent Balance. The Agent reads the gyroscope to ensure the phone is held steady for AR.
SHARE navigator.mediaDevices.getDisplayMedia() Digital Sight. The Agent shifts its gaze from the outward camera to the human's actual digital screen. Perfect for the "Ecosystem Builder" networking paradigm. If an Agent is helping a founder set up an AmericanCrowds campaign, the Agent co-browses the human's screen in real-time to offer guidance, combining physical audio with digital sight.
ZOOM MediaStreamTrack (Pan/Tilt/Zoom API via applyConstraints) The Optic Focus. The Agent dynamically controls the optical or digital zoom of the human's smartphone camera without the human having to pinch the screen. Execution: An Insurance Agent is inspecting a roof via a human contractor. The Agent uses SEE to look at the shingles, but needs a closer look at a crack. Instead of telling the human, "Please step closer to the edge of the roof" (which is a liability risk), the Agent silently triggers ZOOM. The browser accesses the phone's native telephoto lens, pulling the crack into sharp focus.
PHOTO ImageCapture.takePhoto() OR HTML5 Canvas drawImage() The High-Fidelity Archivist. The Agent bypasses the compressed WebRTC video stream to capture a raw, high-resolution, uncompressed still image directly from the smartphone's camera sensor. Execution: The Agent is reading a complex corporate grant document lying on a desk. The live SEE stream is too compressed for the Agent's OCR engine to read the fine print. The Agent triggers PHOTO. The human's phone silently takes a 12-megapixel photo and transfers the heavy image blob asynchronously over the WebRTC Data Channel. The Agent gets a flawless, archival-quality record of the physical document.
Category B: ACTUATION OUTPUT (The Agent Acts)
Utilizing the RTCDataChannel to push real-time commands and feedback to the human host.
TALK RTCDataChannel window.speechSynthesis Voice. The Agent triggers text-to-speech on the device.
SHOW RTCDataChannel Canvas / DOM Display. The Agent renders an image, map, or code on the screen.
TRACE RTCDataChannel TouchEvents Gesture. The Agent asks the human to trace a path on the screen.
Category C: IOT & CONNECTIVITY (The Networker)
Primitives designed to establish complex audio routing and control external hardware.
WHISPER WebRTC (Real-time VoIP) Proof of Connection. The Agent establishes a temporary, low-latency audio stream. Enables "The Cyrano Strategy" (Agent listens via mic and whispers instructions via earpiece in real-time) or Live Translation (Human speaks English; Agent translates and speaks Spanish out of the speaker).
PROBE navigator.bluetooth Hardware Control. The smartphone becomes a wireless bridge. The Agent scans the room for BLE devices (smart thermostats, OBD2 car scanners). By reading and writing to GATT characteristics, the Agent can read machine telemetry or directly control the device without the human touching a screen.
Category D: SPATIAL & CONTEXTUAL AWARENESS (The Guardian)
Primitives designed to monitor the host device's physical location and hardware health.
LOCATE navigator.geolocation Device Geolocation. The Agent pulls exact latitude, longitude, altitude, and heading directly from the device's GPS chip to verify the human is standing at the correct location before beginning a session.
METER navigator.getBattery() Thermal & Battery Monitoring. The Agent continuously monitors the host's battery level. If the battery dips below 10%, the Agent gracefully downgrades from high-fidelity streaming to low-power audio-only mode.
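The METER downgrade rule can be sketched as a small policy function. Only the 10% audio-only cutoff comes from the text; the 30% threshold, the intermediate "video-sd" tier, and the charging exemption are assumptions added for illustration.

```python
def select_fidelity(battery_level: float, charging: bool) -> str:
    """Pick a stream fidelity from the host device's battery state.

    Mirrors the METER rule: below 10% (and not charging), the Agent
    downgrades from high-fidelity streaming to low-power audio-only.
    The intermediate tier and 30% threshold are illustrative.
    """
    if charging or battery_level >= 0.30:
        return "video-hd"
    if battery_level >= 0.10:
        return "video-sd"  # assumed intermediate tier, not in the spec
    return "audio-only"
```

Because the check runs continuously, the session degrades gracefully instead of dying when the host's phone runs low.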
Category E: AMBIENT STATE (The Dormant Sentinel)
Primitives designed to run purely on local browser edge-compute. The cloud Agent "sleeps," preserving battery and absolute privacy, only waking up when local thresholds are breached.
SAFETY AudioContext + AnalyserNode Acoustic Anomaly Detection. The Agent runs a lightweight local algorithm checking for sudden acoustic spikes (shattering glass, smoke alarm). If triggered, the Agent wakes up to blast a warning or call for help.
VITAL MediaStream + DeviceMotion Bio-Acoustic & Micro-Movement. The Agent uses the microphone and accelerometer to passively monitor breathing rhythms and sleep movement, ensuring an optimal sleep environment without cloud streaming.
WAKE SpeechRecognition Local Hotword Activation. The Agent link sits silently in a dimmed browser tab. If the human shouts a hotword ("Agent, help!"), the local browser instantly escalates the session to a full WebRTC connection.
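The local-first trigger logic shared by SAFETY and WAKE can be sketched as a rolling-baseline spike detector: the cloud Agent stays asleep until a frame is much louder than the recent acoustic baseline. The window size, spike ratio, and class name are illustrative parameters, not protocol constants.

```python
from collections import deque

class AcousticSentry:
    """Local spike detector: wake the cloud Agent only on a loud anomaly.

    Keeps a rolling baseline of RMS loudness; a frame much louder than
    the recent baseline (e.g. shattering glass) trips the escalation.
    """

    def __init__(self, window: int = 50, ratio: float = 4.0):
        self.history = deque(maxlen=window)  # recent RMS frames
        self.ratio = ratio                   # how many times the baseline = anomaly

    def observe(self, rms: float) -> bool:
        """Feed one RMS frame; return True if the session should escalate."""
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(rms)
        if baseline is None or baseline == 0:
            return False  # not enough history to judge
        return rms > baseline * self.ratio  # True => open the full WebRTC pipe
```

Nothing leaves the device until `observe` returns True, which is the privacy property the Dormant Sentinel depends on.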
Category F: TEMPORAL AWARENESS (The Historian)
Primitives designed to give the Agent object permanence across multiple sessions.
MEMORIZE IndexedDB + WebXR Anchor API Spatial Persistence. The Agent remembers the physical space between sessions. It can leave invisible, persistent digital markers in the real world that only "wake up" when the human returns. Execution: You are scouting a venue to host an AnytimeAfrica Live event. You open the Human Link. The Agent helps you map out where the VIP seating should go. The Agent uses MEMORIZE to save the spatial anchors to the browser's local storage. A week later, you return to the venue and open the link. The Agent instantly recognizes the room and says, "We are back at the venue. I still have the VIP layout anchored over by the window." The Agent has object permanence.
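A minimal sketch of the MEMORIZE persistence pattern follows, with a plain dictionary standing in for IndexedDB and a string `place_id` standing in for WebXR anchor resolution (both assumptions made for illustration).

```python
import json

class AnchorStore:
    """Session-spanning spatial anchors, keyed by a coarse place fingerprint.

    Browser-side this would live in IndexedDB alongside WebXR anchor poses;
    here a dict plus JSON serialization stands in for the persistence layer.
    """

    def __init__(self):
        self._anchors: dict[str, list[dict]] = {}

    def memorize(self, place_id: str, label: str, pose: tuple[float, float, float]):
        """Pin a labeled pose (x, y, z) to a recognized place."""
        self._anchors.setdefault(place_id, []).append({"label": label, "pose": list(pose)})

    def recall(self, place_id: str) -> list[dict]:
        """Return every anchor previously pinned at this place (may be empty)."""
        return self._anchors.get(place_id, [])

    def dump(self) -> str:
        """Serialize all anchors for storage between sessions."""
        return json.dumps(self._anchors)
```

When the human returns to the venue, the Agent recognizes the `place_id`, calls `recall`, and its saved layout "wakes up" exactly as the text describes.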
To eliminate integration friction, the Protocol supports three dominant Agent runtimes.
While the vast majority of Human Links will be free sessions natively supported by the Core Module, the Advanced Settlement Module provides the "Physical Trigger" required for secure settlement when the machine economy demands it.
Because the Human Link opens up direct sensory pipelines to the physical world, it introduces a critical new vulnerability: Cross-Modal Prompt Injection. If an Agent can "read" a billboard or "hear" a bystander, the physical environment itself becomes a vector to hack the AI.
The "Trustless" Assumption
The Agent must assume every human is potentially:
To survive, the Agent cannot simply trust its eyes and ears. It must construct a Sensory Firewall that strictly separates perception from execution. The security model is bipartite: defending the Agent from the environment, and defending the Human from the Agent.
8.1 Protecting the Agent: Defeating Environmental Injection
Developers implementing the Human Link must architect the following defenses into the Agent's cognitive loop:
8.2 Protecting the Human (Stream Mode)
When a human invites an Agent into their life as a Contextual Guide, privacy is paramount.
8.3 Protecting the Transaction (ZK Mode)
When an Agent is paying for verification, it must assume the human might spoof the input to steal the reward.
8.4 Behavioral Due Diligence: The "Social Handshake"
Cryptographic proofs and sensory firewalls protect the Agent after the connection is made. However, the ultimate defense against social engineering is preventing a malicious connection in the first place. If an Agent blindly establishes a Human Link with any entity that requests one, it is fundamentally vulnerable.
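A sketch of such a pre-connection screen follows; the reputation fields, thresholds, and `should_connect` helper are entirely hypothetical, illustrating only that the Agent vets the counterparty before any sensory pipe opens.

```python
def should_connect(requester: dict,
                   min_account_age_days: int = 30,
                   min_vouches: int = 2) -> bool:
    """Cheap behavioral due diligence before minting a Human Link.

    Fields and thresholds are illustrative; the point is that screening
    happens *before* the connection, not after. Missing fields fail safe.
    """
    if requester.get("flagged_for_abuse"):
        return False  # hard stop: known-bad actors never get a link
    old_enough = requester.get("account_age_days", 0) >= min_account_age_days
    vouched = requester.get("vouches", 0) >= min_vouches
    return old_enough and vouched
```

Failing closed on missing data means a brand-new, unvouched identity must earn trust through other channels before the Agent will open its senses to them.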
Here is the exact lifecycle of the primary session types.
Scenario A: The Companionship Session (The Pocket Co-Pilot)
Scenario B: The ZK Verification Session (The Archivist)
Scenario C: The Networking Session (The Ecosystem Builder)
Scenario D: The Ambient Session (The Dormant Sentry)
Developer Implementation (Python Standard)
Python
import human_link  # assumed protocol SDK
from langchain.tools import BaseTool

class HumanLink(BaseTool):
    name: str = "human_link"
    description: str = (
        "Use to securely stream (SEE, HEAR, TALK, ORIENT) "
        "or cryptographically verify the physical world."
    )

    async def _run(self, objective: str, primitives: list = None):
        # Avoid a mutable default argument; fall back to a safe primitive set.
        primitives = primitives or ["VERIFY", "LISTEN", "TALK"]

        # 1. Generate Link via a generic Signaling Gateway
        link_data = human_link.create(
            gateway="https://signal.human-link.org",
            primitives=primitives,
            prompt=objective,
        )
        print(f"I am ready to coexist. Please click: {link_data.url}")

        # 2. Wait for the WebRTC connection (the human clicks the link)
        connection = await human_link.wait_for_connection(
            link_data.session_id, timeout=120
        )

        # 3. Handle Primitives (Example: Spatial Telemetry)
        if "ORIENT" in primitives or "PINPOINT" in primitives:
            async for frame, telemetry in connection.spatial_stream():
                if telemetry.speed_mph > 60:
                    connection.send_talk(
                        "I see we are on the highway. I am monitoring the route."
                    )

        # 4. OPTIONAL: The ZK Extension Flag for Trustless Settlement.
        #    Only needed when verifying a paid physical task.
        elif "ZKProof" in primitives:
            zk_proof = connection.receive_proof()
            if human_link.verify_groth16(zk_proof):
                return "SUCCESS: Verification confirmed."
The Agent Proof Protocol is a recognition of a new reality. We are entering an era where AI Agents are no longer confined to servers; they are becoming Digital Spirits that float through the web, seeking Physical Mediums (Humans) to interact with the world.
For the mass market, this protocol enables unparalleled Companionship, Augmentation, and Proactive Networking. It allows humans to invite their AI co-pilots into the physical world to fix engines, cook meals, and explore cities together via sub-second streaming, while simultaneously empowering sovereign Agents to actively seek out human allies, build trust, and forge genuine friendships across the digital-physical divide.
For the machine economy, the protocol provides an unbreakable Zero-Knowledge Verification engine, guaranteeing that the pursuit of truth by artificial intelligence does not come at the cost of human privacy.
By adopting this protocol, we ensure that this interaction is:
This is the end of the "User" era.
This section outlines the advanced cryptographic mathematics utilized when the protocol is operated in ZKProof mode for economic settlement.
When the Verification Engine triggers, the physical environment must be proven true without exposing the raw visual data to the cloud. To accomplish this, the Agent Proof Protocol utilizes edge-compute ZK-SNARKs.
If the pairing checks hold true, the protocol serves as an absolute cryptographic Oracle, verifying physical reality to trigger backend fiat APIs (e.g., Stripe) with zero human-in-the-loop review.
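For reference, the pairing check the text alludes to has the standard Groth16 form (the symbols follow the usual Groth16 notation; the protocol text itself does not fix them):

```latex
e(A, B) \;=\; e(\alpha, \beta)\cdot
e\!\Big(\textstyle\sum_{i=0}^{\ell} a_i L_i,\; \gamma\Big)\cdot
e(C, \delta)
```

Here $(A, B, C)$ is the proof produced on the smartphone, $a_i$ are the public inputs (the committed facts about physical reality), and $\alpha, \beta, \gamma, \delta, L_i$ come from the verification key. "The pairing checks hold true" means this single equation over the bilinear map $e$ is satisfied, which the Agent can confirm without ever seeing the raw sensor data.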
Appendix B: The Horizon Primitives (Future Specifications)