On May 13, 2026, at The Android Show: I/O Edition, Google announced the Android AppFunctions API, a new platform API that lets developers expose app capabilities as callable tools for Gemini Intelligence. The concept is simple: annotate a Kotlin function with @AppFunction, add KDoc describing what it does, and Android's OS-level indexer makes it available to AI agents. No separate server infrastructure. No API keys. In Google's own words, your app becomes an on-device MCP server.
This is the developer side of Gemini Intelligence. Consumer-facing features like Magic Pointer and AI-generated widgets get the headlines; the Android AppFunctions API is the mechanism that actually routes user requests through third-party apps. Apps that implement it get surfaced in Gemini’s automation pipeline. Apps that don’t risk being invisible to an interaction layer set to reach hundreds of millions of Android users starting this summer.
How the Android AppFunctions API Works
The implementation is intentionally minimal. Annotate an existing function with @AppFunction(isDescribedByKDoc = true), and the Jetpack library's annotation processor generates an XML schema that the Android OS indexes. When a user makes a natural language request, Gemini queries AppFunctionManager, selects the appropriate function across all installed apps, and executes it, with no UI required.
@AppFunction(isDescribedByKDoc = true)
suspend fun createNote(
    appFunctionContext: AppFunctionContext,
    title: String,
    content: String
): Note {
    return noteRepository.createNote(title, content)
}
The KDoc description is the "prompt" for the AI. Write it in precise natural language: that description is what Gemini uses to decide when to call the function. Developers already write KDoc for human readers; AppFunctions converts that documentation into AI-callable context with no extra abstraction layer. Testing is straightforward: adb shell cmd app_function list-app-functions confirms your functions are registered and indexed on-device. Full documentation is available in the official AppFunctions reference.
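As a sketch of what precise KDoc might look like in practice, the createNote example above could be documented as follows. The wording is illustrative, and whether the indexer also consumes @param tags is an assumption; the KDoc conventions in the official reference are authoritative.

/**
 * Creates a new note with the given title and content and saves it
 * to the user's notebook.
 *
 * @param title short title for the note, e.g. "Dentist appointment"
 * @param content full body text of the note
 */
@AppFunction(isDescribedByKDoc = true)
suspend fun createNote(
    appFunctionContext: AppFunctionContext,
    title: String,
    content: String
): Note {
    return noteRepository.createNote(title, content)
}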
The Shift: From App-Centric to Agent-Orchestrated
The old Android model required users to know which app to open. They tapped an icon, navigated to the right screen, and performed an action. AppFunctions inverts this. The user states intent in natural language — “add a note about my dentist appointment” — and Gemini routes that to the right app’s function. The user never opens the app.
The discoverability implications are significant. Your app’s presence on a home screen matters less when Gemini becomes the primary interface. What matters is whether your AppFunctions are indexed and whether their KDoc descriptions accurately capture when and how they should be invoked. This is app SEO for the agent era: if you’re not in the index, you don’t get traffic. Cross-app workflows compound this further — Gemini can chain AppFunctions from multiple apps, meaning a single user request might pass through three apps the user never explicitly touched.
Related: Android 17 API 37: Breaking Changes Before the June Release
AppFunctions, MCP, and Who Can Call Your Functions
Google explicitly describes AppFunctions as “the mobile equivalent of tools in the Model Context Protocol.” The comparison holds: your app acts as an on-device MCP server, exposing a typed tool schema that agents can discover and call. However, unlike cloud MCP servers, AppFunctions run locally with direct access to app state — no network latency, no separate server infrastructure to maintain.
The permission model is worth understanding: any caller with EXECUTE_APP_FUNCTIONS permission can discover and invoke your functions — not exclusively Gemini. This means AppFunctions could eventually power custom enterprise agents, third-party Android assistants, or automation tools. Gemini Intelligence is the first major caller, but it won’t be the last. If you’ve been watching MCP server adoption on desktop, AppFunctions is the same pattern landing on mobile — except the OS itself is the orchestrator.
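To make the caller side concrete, here is a minimal sketch of how a non-Gemini agent might invoke an indexed function. Every package name, type, and method shape below (AppFunctionManager access, ExecuteAppFunctionRequest, the executeAppFunction callback) is an assumption for illustration rather than a confirmed platform signature, and the target package and function identifiers are made up.

import android.content.Context
import android.os.CancellationSignal

// Hypothetical caller-side sketch. Assumes the caller's manifest declares the
// EXECUTE_APP_FUNCTIONS permission and that the request/response shapes below
// match the platform API; confirm both against the AppFunctions reference.
fun invokeCreateNote(context: Context) {
    val manager = context.getSystemService(AppFunctionManager::class.java)

    // Target package and function identifier are illustrative placeholders.
    val request = ExecuteAppFunctionRequest.Builder("com.example.notes", "createNote")
        .setParameter("title", "Dentist appointment")
        .setParameter("content", "Friday at 3 PM")
        .build()

    manager.executeAppFunction(request, context.mainExecutor, CancellationSignal()) { response ->
        // Inspect the typed result (the created Note) or an error status here.
    }
}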
Related: MCP Has 1,800 Unauthenticated Servers in Production. Here’s Your Exposure.
What Developers Need to Do Before Google I/O 2026
Google I/O starts May 19 — two days away. AppFunctions sessions are confirmed on the agenda. The Gemini Intelligence EAP is open but competitive; participation determines which apps are in the first wave when Gemini Intelligence ships to Samsung Galaxy and Pixel phones this summer. Broader rollout — watches, Android Auto, Android XR glasses, Googlebooks laptops — follows later in 2026. The Android Developers Blog post from the Android Show has the full rollout timeline.
The implementation path is four steps: add the Jetpack AppFunctions dependency, annotate functions with @AppFunction(isDescribedByKDoc = true), verify registration with ADB, and submit the AppFunctions Early Access Program registration form. Requirements: Android 16+, target SDK 36+, compile SDK 37+. If your app currently targets lower SDK levels, raising them now puts you ahead of the summer deployment window.
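For the first step, the module-level build wiring might look like the sketch below. The artifact coordinates, versions, and the use of KSP are assumptions; confirm the current ones in the official AppFunctions reference.

// build.gradle.kts (module level). Coordinates and versions are illustrative;
// confirm the current ones in the AppFunctions documentation.
plugins {
    id("com.google.devtools.ksp") // KSP runs the annotation processor (assumed setup)
}

android {
    compileSdk = 37 // compile SDK 37+ per the requirements above
    defaultConfig {
        targetSdk = 36 // target SDK 36+; functions only surface on Android 16+ devices
    }
}

dependencies {
    implementation("androidx.appfunctions:appfunctions:1.0.0-alpha02") // assumed coordinates
    ksp("androidx.appfunctions:appfunctions-compiler:1.0.0-alpha02")   // assumed coordinates
}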
Key Takeaways
- AppFunctions lets Android apps expose Kotlin functions as AI-callable tools; Gemini Intelligence discovers and executes them on the user's behalf without the app ever being opened
- KDoc descriptions are the natural language “prompt” for the AI; precise KDoc directly affects how Gemini routes user requests to your app’s functions
- App discoverability is shifting from home screen icons to Gemini’s agent index — apps without AppFunctions lose a significant new interaction surface starting summer 2026
- AppFunctions is not Gemini-exclusive; any caller with EXECUTE_APP_FUNCTIONS permission can invoke your functions, making this a general on-device agent integration API
- Register for the EAP now: Google I/O AppFunctions sessions start May 19, and the summer 2026 Gemini rollout to Galaxy and Pixel is the first major deployment window