Inspiration

We realized that while the Logitech Actions Ring is powerful, it lacks custom actions for most apps, which is a massive friction point that breaks "flow state." We were inspired to build a truly universal interface that leverages the emerging Model Context Protocol (MCP) to instantly understand any application without manual setup, freeing users from remembering hundreds of different shortcuts.

What it does

Logitum transforms the Actions Ring into a dynamic, self-enhancing controller. Instead of static macros, we use an LLM to analyze the MCP features exposed by your active application and automatically select the eight most impactful actions. After initialization, it learns from your behavior, recording your actions in a semantic store (translating raw events like "Alt+F4" into concepts like "Close App") so it can intelligently group them and predict the exact tool you'll need next.
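
To make "concepts, not keystrokes" concrete, here is a minimal sketch of the shape of one logged action and the hook around the LLM translation step; all type and member names are illustrative, not our exact implementation:

```csharp
using System;
using System.Threading.Tasks;

// Illustrative sketch: one semantically logged action.
public record SemanticAction(
    string App,        // foreground application, e.g. "Figma"
    string RawEvent,   // raw input, e.g. "Alt+F4"
    string Concept,    // LLM-produced meaning, e.g. "Close App"
    float[] Embedding, // vector embedding of the concept, used later for grouping
    DateTime LoggedAt);

public interface IConceptTranslator
{
    // Wraps the LLM call that turns a raw event into a named concept.
    Task<string> ToConceptAsync(string app, string rawEvent);
}
```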

How we built it

We engineered a sophisticated plugin with the Logitech Actions SDK that communicates with a custom .NET backend ingesting MCP data. An LLM runs as a periodically scheduled (cron-style) "semantic filter" to categorize potential actions, which are then stored in SQLite along with their vector embeddings. A vector-similarity clustering algorithm runs over this data to identify and group similar user intents, ensuring the Actions Ring is always populated with statistically relevant, context-aware options.
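
The sketch below condenses the storage and grouping step. It assumes embeddings are computed upstream and an actions(app, concept, embedding) table already exists; the table layout, the JSON encoding of embeddings, and the similarity threshold are illustrative rather than our exact schema:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
using Microsoft.Data.Sqlite;

public static class ActionStore
{
    // Persists one categorized action produced by the LLM "semantic filter".
    public static void Save(SqliteConnection db, string app, string concept, float[] embedding)
    {
        var cmd = db.CreateCommand();
        cmd.CommandText = "INSERT INTO actions(app, concept, embedding) VALUES ($app, $concept, $emb)";
        cmd.Parameters.AddWithValue("$app", app);
        cmd.Parameters.AddWithValue("$concept", concept);
        cmd.Parameters.AddWithValue("$emb", JsonSerializer.Serialize(embedding)); // embedding stored as JSON text
        cmd.ExecuteNonQuery();
    }

    // Greedy single-pass grouping: an action joins the first cluster whose seed
    // is within the cosine-similarity threshold, otherwise it starts a new cluster.
    public static List<List<(string Concept, float[] Emb)>> Cluster(
        IEnumerable<(string Concept, float[] Emb)> actions, double threshold = 0.85)
    {
        var clusters = new List<List<(string Concept, float[] Emb)>>();
        foreach (var a in actions)
        {
            var home = clusters.FirstOrDefault(c => Cosine(c[0].Emb, a.Emb) >= threshold);
            if (home is null) clusters.Add(new() { a }); else home.Add(a);
        }
        return clusters;
    }

    static double Cosine(float[] x, float[] y)
    {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < x.Length; i++) { dot += x[i] * y[i]; nx += x[i] * x[i]; ny += y[i] * y[i]; }
        return dot / (Math.Sqrt(nx) * Math.Sqrt(ny) + 1e-9);
    }
}
```

However the clusters are then ranked (frequency, recency, or LLM scoring), the highest-ranked representatives are what populate the eight ring slices.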

Challenges we ran into

Our most significant technical hurdle was implementing dynamic, real-time updating of the Actions Ring. The SDK ecosystem is largely designed around predefined profiles, so forcing the interface to "hot-swap" visual slices and action payloads based on asynchronous LLM and SQL queries required careful state management to ensure the UI rendered instantly.
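
One way to express that state management is a "latest-wins" refresh, where a generation counter discards any slow LLM/SQL result that has already been superseded. The sketch below is a simplified illustration; IRingRenderer is a hypothetical stand-in, not the actual Logitech Actions SDK surface:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstraction over whatever pushes slices to the device.
public interface IRingRenderer
{
    Task RenderAsync(IReadOnlyList<string> actionLabels);
}

public sealed class RingState
{
    private readonly IRingRenderer _renderer;
    private int _generation;

    public RingState(IRingRenderer renderer) => _renderer = renderer;

    // Called whenever the foreground app changes; runs the async LLM/SQL pipeline.
    public async Task RefreshAsync(Func<Task<IReadOnlyList<string>>> queryTopActions)
    {
        int myGeneration = Interlocked.Increment(ref _generation);
        IReadOnlyList<string> actions = await queryTopActions();

        // Drop the result if a newer refresh started while we were waiting,
        // so a stale app context can never overwrite the current ring.
        if (myGeneration == Volatile.Read(ref _generation))
            await _renderer.RenderAsync(actions);
    }
}
```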

Accomplishments that we're proud of

We significantly extended the scope of a standard "macro pad" by creating a universal abstraction layer. We are proud that Logitum doesn't need hardcoded profiles: if an app supports MCP, or if LLMs are aware of it, our system immediately "understands" it. Successfully implementing semantic SQL logging, where the system learns concepts rather than just keystrokes, was a major breakthrough in making the device feel genuinely intelligent.

What we learned

We gained deep insight into the power of the Model Context Protocol as a bridge between AI and UI, learning that the future of productivity isn't about faster input, but about "intent learning." We also dug into the intricacies of the Logitech Actions SDK, specifically how to push dynamic, data-driven payloads to hardware in real time.

What's next for Logitum

We plan to optimize our SQL clustering algorithms to support multi-step agentic workflows (triggering a sequence of MCP actions with one click) and to improve the UX and styling of the Actions Ring.

Presentation

Link to the project presentation: https://docs.google.com/presentation/d/1bgZ21TdaCVUjY9BAymdR101OO0zo4oihyR5RAcpT5SI/edit?usp=sharing
