Large #Enterprise #Schemas = Large Confusion. Even #GraphRAG needs boundaries to keep #LLMs focused. Here's how to fix it: use #Memgraph's fine-grained #AccessControls. They make GraphRAG more accurate and explainable at enterprise scale. Josip Mrđen explains how 👉 https://lnkd.in/gsiGAQr6 #ContextEngineering
How to avoid confusion with large enterprise schemas using Memgraph's access controls.
More Relevant Posts
🚀 Just Built My First MCP Server – And It's a Game Changer!

I'm thrilled to share that I've successfully created my first Model Context Protocol (MCP) server – an Expense Tracker – and integrated it with Claude Desktop. This journey has completely changed how I think about AI integrations!

🤔 What is MCP?
MCP (Model Context Protocol) is Anthropic's open standard that lets AI assistants like Claude securely connect to external data sources and tools in real time. Think of it as giving Claude direct access to your applications, databases, and systems.

🏗️ MCP Architecture (The Three Key Components):
1️⃣ MCP Hosts – applications like Claude Desktop that want AI capabilities
2️⃣ MCP Clients – protocol implementations that maintain server connections
3️⃣ MCP Servers – your custom services (like my Expense Tracker) that expose data and functionality
This three-tier architecture creates a secure, standardized way for AI to interact with any system!

⚡ MCP vs Traditional APIs – Why This Matters:

Traditional APIs:
❌ You send requests and wait for responses
❌ The AI must parse documentation and guess endpoints
❌ Limited context about your data structure
❌ Each integration needs custom code

MCP:
✅ The AI has persistent connections to your tools
✅ Self-describing – the AI understands capabilities automatically
✅ Contextual awareness – the AI knows your data schema and relationships
✅ Standardized protocol – one integration pattern for everything
✅ Bidirectional – the AI can both read and write data naturally

💡 My Experience:
Building this expense tracker opened my eyes to what's possible:
✅ Natural conversations – no more rigid commands. I just tell Claude "Add my coffee expense" and it happens
✅ Real database integration – connected to PostgreSQL via Docker, using SQLAlchemy for async operations
✅ Production-ready stack – built with FastMCP, Docker, and proper data validation
✅ Powerful features – user management, expense tracking, category summaries, date-range filtering, all through conversation

🎯 The "Aha!" Moment (to be clear, this is my 'Aha!' moment, not the DeepSeek-R1-Zero 'Aha!' moment 😁): instead of building yet another CRUD API and then figuring out how to explain it to an AI, MCP lets Claude natively understand my expense tracker. It's not calling APIs – it's directly integrated. The AI becomes part of my application architecture.

🔮 What's Next – this is just the beginning! I'm exploring:
• Building MCP servers for productivity tools
• Connecting multiple data sources
• Creating agentic workflows that span systems

📂 Repo: https://lnkd.in/eTu8Gbfc

#MCP #ModelContextProtocol #AI #Claude #Anthropic #MachineLearning #OpenSource #SoftwareEngineering #API #AIAgents #Automation
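To make the server side concrete, here is a minimal sketch of the pattern described above, using the FastMCP helper from the official MCP Python SDK. It keeps expenses in an in-memory list instead of the PostgreSQL/SQLAlchemy stack mentioned in the post, and the tool names are illustrative, not taken from the linked repo:

```python
# Minimal sketch of an MCP expense-tracker server using the FastMCP helper
# from the official MCP Python SDK. Uses an in-memory list rather than the
# author's PostgreSQL/SQLAlchemy stack; tool names are illustrative.
from datetime import date
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("expense-tracker")
_expenses: list[dict] = []  # stand-in for a real database table

@mcp.tool()
def add_expense(amount: float, category: str, note: str = "") -> dict:
    """Record a single expense."""
    entry = {"amount": amount, "category": category,
             "note": note, "date": date.today().isoformat()}
    _expenses.append(entry)
    return entry

@mcp.tool()
def summarize_by_category() -> dict:
    """Return total spend per category."""
    totals: dict[str, float] = {}
    for e in _expenses:
        totals[e["category"]] = totals.get(e["category"], 0.0) + e["amount"]
    return totals

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which Claude Desktop expects
```

Registering a server like this in Claude Desktop's MCP server configuration is what turns "Add my coffee expense" into a structured tool call rather than free-text parsing.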
🚀 A small experiment that felt like a big shift – MCP Server + LLM for conversational insights

I tried something simple: I connected the MSSQL MCP Server to Claude Desktop and VS Code Copilot Chat, pointed it at my local AdventureWorksDW database… and instead of writing a single SQL query or designing a dashboard, I just started asking questions in plain language.

Something interesting happened.
• Instead of giving me raw tables, it summarised the results like a business report
• It didn't just return numbers – it highlighted insights like revenue-drop patterns, strong markets, attach-rate potential, VIP customers, seasonality curves…
• And the best part – I didn't shift between tools. The conversation itself became the analytics workspace.

This MCP + LLM combo feels different because:
• No manual querying, no drag-and-drop visuals – just ask, then go deeper with follow-up questions
• SQL becomes just one tool behind the server – the LLM handles the thinking and narration
• It's not limited to "show me the data" – it naturally flows into "tell me what matters here"
• And because it's MCP, I can swap models later (Claude, GPT, Copilot Chat…) without rebuilding anything

The real value: it didn't feel like working with a database. It felt like having a business conversation with my data. Not dashboards. Not queries. Just insight-ready conversations.

I'm calling this – for myself at least – a shift from Data Analytics → Conversational Analytics + Insights.

FYI, the entire PDF was generated by the LLM + MCP server from a single prompt; I've included only five pages here.

Check out this GitHub repo for more MCP servers: https://lnkd.in/g_YkScW2
Thanks Pawel Potasinski for your post: https://lnkd.in/gJPKrgjB

#MCP #LLM #Claude #VSCopilotChat #SQL #ConversationalAnalytics #DataInsights #AzureData #AIEngineering #DataTeams #AdventureWorksDW #NextGenBI
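For readers who want to see the shape of such a server, here is a hedged sketch of the core pattern: a single read-only query tool exposed over MCP. This is not the MSSQL MCP Server used in the post (that is a pre-built project); the connection string, driver, and tool name are assumptions for illustration.

```python
# Sketch of the pattern behind a SQL MCP server: one read-only query tool.
# Connection string and driver are illustrative assumptions; the MSSQL MCP
# Server mentioned in the post is a separate, pre-built project.
import pyodbc
from mcp.server.fastmcp import FastMCP

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=AdventureWorksDW;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

mcp = FastMCP("mssql-readonly")

@mcp.tool()
def run_query(sql: str) -> list[dict]:
    """Execute a read-only SELECT and return rows as dicts for the LLM."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("Only SELECT statements are allowed")
    conn = pyodbc.connect(CONN_STR)
    try:
        cur = conn.cursor()
        cur.execute(sql)
        columns = [col[0] for col in cur.description]
        return [dict(zip(columns, row)) for row in cur.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
```

The LLM decides what to ask, the server decides how data may be accessed – which is why the same conversation works unchanged when the model behind it is swapped.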
How a Microsoft Access Database Should Work (By Design)

A modern Access database should behave more like a modular operating system than a spreadsheet. Each class module represents a living object – Projects, Work Packages, or Value Streams – all governed by PDCA (Plan-Do-Check-Act).

The TreeView isn't just navigation; it's the system's brain – connecting every table, class, and test through a plugin architecture that allows dynamic UI generation and JSON-based caching for instant load times.

True power comes when we apply Knowledge Management directly to Work Package objects, using the rich relational framework inside Access. By leveraging one-to-many, many-to-many, and cross-domain relationships, each Work Package becomes a node in an enterprise knowledge graph – linking tasks, documents, people, and AI insights together in real time.

Forms, controls, and tables should never be hard-wired. They should build themselves from class definitions, validated by AI auditors, and tested through automated TDD modules.

Access, when engineered correctly, isn't legacy software – it's a low-code engine capable of managing projects, people, and knowledge with transparency, structure, and speed.

#AccessDatabase #KnowledgeManagement #PDCA #LeanOfficePro #Master2025 #LowCode #Automation #AIIntegration #ValueStreamMapping
Hello Connections,

Traditional Retrieval-Augmented Generation (#RAG) systems are powerful, but they often miss one key element: relationships. They retrieve similar chunks, not connected knowledge.

That's where #GraphRAG steps in – combining the semantic understanding of #LLMs with the relational intelligence of knowledge graphs.

In my latest #Medium article, I explore how to integrate #Neo4j into the RAG architecture and design an effective #knowledgegraph to enhance retrieval and reasoning capabilities.

Read the full article here: https://lnkd.in/gAkm6PTm
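As a rough illustration of the retrieval step such an architecture adds (this is not code from the article), the sketch below queries Neo4j with the official Python driver, expanding from a matched entity to its neighbours so the relationship facts can sit alongside vector-retrieved chunks. Node labels, relationship handling, and connection details are assumptions.

```python
# Sketch of the graph-retrieval step in a GraphRAG pipeline using the
# official neo4j Python driver. Labels, property names, and connection
# details are illustrative assumptions, not taken from the linked article.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def related_context(entity_name: str, limit: int = 10) -> list[str]:
    """Expand from a matched entity to its neighbours and return short
    relationship facts the LLM can use alongside retrieved text chunks."""
    query = (
        "MATCH (e:Entity {name: $name})-[r]-(n) "
        "RETURN e.name AS source, type(r) AS rel, n.name AS target "
        "LIMIT $limit"
    )
    with driver.session() as session:
        records = session.run(query, name=entity_name, limit=limit)
        return [f"{rec['source']} -[{rec['rel']}]-> {rec['target']}" for rec in records]

print(related_context("Acme Corp"))
driver.close()
```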
Done data cleaning before? Ever wondered how the process is carried out in big tech? Let me show you how it's done at enterprise scale. 🧹

Most people think "data cleaning" means dropping nulls or removing duplicates. But in real-world data engineering and data science workflows, cleaning is a multi-stage pipeline – automated, monitored, and modular.

Here's a quick look at how I built a production-grade Data Cleaning Pipeline (using pandas for demo clarity – in production, this would scale via Polars or PySpark):

🚀 Pipeline Highlights:
• Automated Data Profiling – detects missing values, outliers, correlations, and duplicate signatures.
• Schema Validation & Type Enforcement – ensures every column follows strict data-type and domain rules.
• Fuzzy Duplicate Detection – uses text-similarity measures (Levenshtein, cosine similarity, etc.) to catch near-duplicates missed by traditional checks.
• Dynamic Cleaning Rules – handles context-based corrections (inconsistent date formats, trimming spaces, standardizing categorical labels).
• Outlier Detection – applies IQR and Z-score thresholds to flag numeric anomalies.
• Correlation Profiling – flags redundant or collinear features before model ingestion.
• Logging & Audit Trails – every transformation step is tracked for reproducibility.
• Scalable Architecture – designed to plug into ETL workflows and scale to millions of rows.

🧠 The goal? To build a data-cleaning layer that's trustworthy, reproducible, and industry-ready – turning raw, messy data into clean, analysis-ready datasets at scale.

📊 This pipeline is part of my ongoing Enterprise Data Cleaning Project, where I'm simulating how large organizations maintain data quality before analytics or machine learning.

Link to the GitHub project: https://lnkd.in/dGhqvVed
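As a small, self-contained illustration of two of those stages in pandas – IQR-based outlier flagging and duplicate signatures – here is a sketch with hypothetical column names; it is not the code from the linked repository.

```python
# Minimal sketch of two stages from the post: IQR-based outlier flagging and
# duplicate-signature detection. Column names and data are hypothetical; this
# is not the code from the linked repository.
import pandas as pd

df = pd.DataFrame({
    "customer": ["Acme", "acme ", "Beta Co", "Acme"],
    "amount": [120.0, 120.0, 95.5, 10_000.0],
})

# Outlier detection: flag values outside 1.5 * IQR of the 'amount' column.
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
df["amount_outlier"] = ~df["amount"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)

# Duplicate signatures: normalise text, then build a row key that catches
# near-duplicates differing only by case or stray whitespace.
df["signature"] = (
    df["customer"].str.strip().str.lower() + "|" + df["amount"].astype(str)
)
df["is_duplicate"] = df["signature"].duplicated(keep="first")

print(df[["customer", "amount", "amount_outlier", "is_duplicate"]])
```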
Agentic RAG + MCP: A Practical Blueprint for Modular, Compliant Retrieval

Building RAG systems that pull from many sources usually implies some level of agency, especially when choosing which source to query. Here's a clean way to evolve that pattern using the Model Context Protocol (MCP) while keeping the system modular and compliant.

1) Understand & refine the query
Route the user's prompt to an agent for intent analysis. The agent may reformulate the prompt (once or iteratively) into one or more targeted queries. It also decides whether external context is required to answer confidently.

2) Retrieve external context (when needed)
If more data is needed, trigger retrieval across diverse domains, for example:
• Real-time user or session data
• Internal knowledge bases and documents
• Public/web sources and APIs

Where MCP adds leverage:
• Domain-owned connectors: each data domain exposes its own MCP server, defining how its data can be accessed and used.
• Built-in guardrails: security, governance, and compliance are enforced at the connector boundary, per domain.
• Plug-and-play growth: add new domains via standardized MCP endpoints – no agent rewrites – enabling independent evolution across procedural, episodic, and semantic memory layers.
• Open interfacing: platforms can publish data in a consistent way for external consumers.
• Focus preserved: AI engineers concentrate on agent topology and reasoning, not bespoke integrations.

3) Distill & prioritize context
Consolidate retrieved snippets and re-rank them with a stronger model than the embedder, keeping only the most relevant evidence.

4) Compose the response
If no extra context is required, or once context is ready, have the LLM synthesize the answer (or propose actions/plans) directly.

5) Verify before delivering
Run a lightweight answer critique: does the output fully address the intent and constraints? If yes → deliver to the user. If no → refine the query and loop again.

♻️ Repost to help others become better system designers.
👤 Follow Kathirvel M and turn on notifications for deep dives into system architecture, scalability, and performance engineering.
💬 Comment with your MCP/Agentic RAG lessons or questions.
🔖 Save this post for your next architecture review.

#AgenticRAG #MCP #ModelContextProtocol #RAG #LLM #AIEngineering #MLOps #SystemDesign #SoftwareArchitecture #Scalability #PerformanceEngineering #EnterpriseAI
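A compressed sketch of that loop in plain Python follows. Every helper is a deliberately trivial stub standing in for the real components (intent analysis, MCP-backed retrieval, a re-ranker, an answer critique); only the control flow mirrors steps 1–5.

```python
# Compressed sketch of the loop in steps 1-5. The helper functions are
# deliberately stubbed placeholders, not a real agent framework or MCP client.
def refine(prompt: str) -> str:
    return prompt.strip()                      # step 1: intent analysis / reformulation

def needs_context(query: str) -> bool:
    return "?" in query                        # step 1: naive placeholder decision

def retrieve(query: str) -> list[str]:
    return [f"snippet about {query}"]          # step 2: would fan out to MCP servers

def rerank(query: str, snippets: list[str]) -> list[str]:
    return snippets[:3]                        # step 3: a stronger model would score these

def compose(query: str, context: list[str]) -> str:
    return f"Answer to '{query}' using {len(context)} snippets"   # step 4

def critique(query: str, answer_text: str) -> bool:
    return len(answer_text) > 0                # step 5: lightweight answer check

def answer(prompt: str, max_loops: int = 3) -> str:
    query = refine(prompt)
    for _ in range(max_loops):
        context = rerank(query, retrieve(query)) if needs_context(query) else []
        draft = compose(query, context)
        if critique(query, draft):
            return draft
        query = refine(query + " (be more specific)")   # failed critique: refine and retry
    return draft

print(answer("What changed in Q3 revenue?"))
```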
This is a fantastic use of AI: detailed, easily digestible content that helps you understand the root cause and guides the remediation steps.
Enjoy AI-driven database observability with DBmarlin. Quickly and easily understand database performance, so your databases run fast and stay fast! 🚀 And get started for free! Start now via the link in the comments 👇 #DBmarlin #FreeTrial #Monitoring
🚀 What powers your favorite apps behind the scenes? It's not AI. It's OLTP systems.

Every time you check your balance, place an online order, or send money, a silent hero ensures it all happens in milliseconds: Online Transaction Processing (OLTP) databases. 💡 These databases don't just store data – they keep our digital world alive and consistent.

Here's why OLTP systems are the backbone of modern business:
⚡ Speed – they handle thousands of reads and writes per second.
🔄 Reliability – ACID properties (Atomicity, Consistency, Isolation, Durability) ensure your money doesn't vanish mid-transfer.
🧩 Concurrency – millions of users, one consistent truth.
🔐 Integrity – each transaction tells a precise story: no duplicates, no lost updates.

But here's the catch 👇
Most businesses still abuse their OLTP systems for analytics – running heavy queries that slow down real-time transactions. That's why understanding the difference between OLTP and OLAP is a data engineer's superpower.

🏗️ The lesson? Separate your transactional and analytical worlds early. It's the foundation of scalable data architecture.

👉 Do you think most companies truly optimize this separation – or are they still running analytics on production databases?

#DataEngineering #OLTP #DataArchitecture #DataStrategy #BusinessIntelligence #DatabaseDesign #AI #BigData
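To make the ACID point concrete, here is a tiny atomicity example using Python's built-in sqlite3 module – chosen only so the snippet is self-contained; real OLTP workloads run on engines like PostgreSQL, MySQL, or SQL Server, and the table and account names are made up.

```python
# Minimal illustration of atomicity: both balance updates commit together
# or not at all. sqlite3 is used only to keep the example self-contained;
# table and account names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(src: str, dst: str, amount: float) -> None:
    """Move money between accounts inside a single transaction."""
    with conn:  # commits on success, rolls back the partial update on error
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))

transfer("alice", "bob", 30.0)
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 70.0), ('bob', 80.0)]
```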
Stop paying tokens to do ETL. With MCP, push fetch/transform/validate/cache into servers next to your data. Let the model plan over typed tools, not grind through text. Result: fewer tokens, faster responses, safer actions—same capability. If you had to cut 40% of LLM spend without losing features, what would you move to MCP first—RAG, analytics, or approvals? Why? #ModelContextProtocol #AI #Agents #MLOps #CostOptimization #OpenStandards #Enterprise
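One way to read "typed tools" in practice: the server declares structured inputs and outputs, so validation and transformation run next to the data and only a small, typed summary travels back to the model. Below is a hedged sketch using the FastMCP helper from the MCP Python SDK; the tool name and fields are hypothetical.

```python
# Hedged sketch of a "typed tool": validation runs server-side, next to the
# data, and the model receives only a compact structured summary instead of
# the raw records. Tool name and fields are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("etl-tools")

@mcp.tool()
def validate_batch(records: list[dict], required_fields: list[str]) -> dict:
    """Check a batch for missing fields; return a summary, not the data."""
    errors = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if f not in rec]
        if missing:
            errors.append(f"row {i}: missing {missing}")
    return {
        "rows_checked": len(records),
        "rows_rejected": len(errors),
        "sample_errors": errors[:5],
    }

if __name__ == "__main__":
    mcp.run()
```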
🚀 Deconstructing the MCP: Pillars and the Communication Flow

The Model Context Protocol (MCP) architecture enables AI agents to use external tools securely and reliably. It operates on a foundation of three core parts and a precise, multi-step communication loop.

I. The Three Pillars of MCP
The architecture separates responsibilities for scalability and security:
✨ A. The Host (AI Application) – the chatbot or agent that decides which tool is needed but does not handle the low-level communication. 🧠
🛠️ B. The Server (External Tool) – the service (e.g., GitHub, Slack, Drive) that executes the actual task.
🤝 C. The MCP Client (Translator) – the dedicated middle layer. Crucially, each server requires its own client (a one-to-one relationship) to keep communication channels decoupled. 🔗

II. The Step-by-Step Communication Flow
Here is how a request (e.g., "Check for new commits") travels through the system:
1. User to LLM: the user asks the Host, and the Host sends the prompt to the LLM. 🗣️
2. LLM decision: the LLM realizes the answer requires an external server (GitHub). 🧭
3. Host to Client: the Host sends a high-level request to the designated MCP Client. ➡️
4. Client translation: the Client converts the request into the JSON-RPC language of the server. ✍️
5. Execution: the Client sends the structured request to the server (GitHub), which performs the task. ✅
6. Server response: the server returns a structured MCP response to the Client. ↩️
7. Client interpretation: the Client translates the server's structured data back into a format the Host and LLM can understand. 📖
8. Final answer: the LLM uses the data to generate the final answer for the user. ✨

This architecture ensures the LLM focuses only on reasoning, while the Client manages all the complex tool-specific communication.

What external tool in your workflow would benefit most from this standardized approach? 👇

#AIArchitecture #MCP #AIAgents #LLMs #TechDeepDive
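For a sense of what step 4 ("Client translation") looks like on the wire: the Client wraps the request in a JSON-RPC envelope using MCP's tools/call method, and the server replies with a structured result. The envelope shape follows the MCP specification; the tool name and arguments below are invented for the post's GitHub example (shown as Python dicts for readability).

```python
# The JSON-RPC envelope an MCP client sends for a tool call (step 4) and the
# structured result that comes back (step 6), shown as Python dicts. The
# "tools/call" method and message shape follow the MCP specification; the
# tool name and arguments are invented for the GitHub example in the post.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_commits",                                   # hypothetical tool name
        "arguments": {"repo": "octo/demo", "since": "2024-01-01"},
    },
}

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "3 new commits since 2024-01-01"}],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```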
Head of Graph Intelligence & AI Integration | Data Governance Council Member at Wolters Kluwer | PhD
@Memgraph It seems there is a slight misalignment here, driven by a shift in terminology. Your interpretation of the enterprise schema appears to overlap significantly with both the Knowledge Graph and the Applied Knowledge Graph, particularly within the context of GraphRAG. While I appreciate your efforts to promote access control for the Applied Knowledge Graph (AKG), it's important to clearly delineate these concepts to avoid confusion.