📝 Announcing our paper that proposes a unified cognitive and computational framework for Artificial General Intelligence (AGI) -- going beyond token-level predictions -- one that emphasizes modular reasoning, memory, agentic behavior, and ethical alignment.

🔹 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐁𝐞𝐲𝐨𝐧𝐝 𝐓𝐨𝐤𝐞𝐧𝐬: 𝐅𝐫𝐨𝐦 𝐁𝐫𝐚𝐢𝐧‑𝐈𝐧𝐬𝐩𝐢𝐫𝐞𝐝 𝐈𝐧𝐭𝐞𝐥𝐥𝐢𝐠𝐞𝐧𝐜𝐞 𝐭𝐨 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐬 𝐟𝐨𝐫 𝐀𝐫𝐭𝐢𝐟𝐢𝐜𝐢𝐚𝐥 𝐆𝐞𝐧𝐞𝐫𝐚𝐥 𝐈𝐧𝐭𝐞𝐥𝐥𝐢𝐠𝐞𝐧𝐜𝐞 𝐚𝐧𝐝 𝐢𝐭𝐬 𝐒𝐨𝐜𝐢𝐞𝐭𝐚𝐥 𝐈𝐦𝐩𝐚𝐜𝐭

🔹 In collaboration with the University of Central Florida, Cornell University, UT MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Toronto Metropolitan University, University of Oxford, Torrens University Australia, Obuda University, Amazon, and others.

🔹 Paper: https://lnkd.in/gqKUV4Mr

✍🏼 Authors: Rizwan Qureshi, Ranjan Sapkota, Abbas Shah, Amgad Muneer, Anas Zafar, Ashmal Vayani, Maged Shoman, PhD, Abdelrahman Eldaly, Kai Zhang, Ferhat Sadak, Shaina Raza, PhD, Xinqi Fan, Ravid Shwartz Ziv, Hong Yang, Vinija Jain, Aman Chadha, Manoj Karkee, Jia Wu, Philip Torr, FREng, FRS, Seyedali Mirjalili

➡️ 𝐊𝐞𝐲 𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬 𝐨𝐟 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐁𝐞𝐲𝐨𝐧𝐝 𝐓𝐨𝐤𝐞𝐧𝐬' 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞‑𝐂𝐨𝐦𝐩𝐮𝐭𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐀𝐆𝐈 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤:

🧠 𝐅𝐨𝐮𝐧𝐝𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤: Integrates cognitive neuroscience, psychology, and AI to define AGI via modular reasoning, persistent memory, agentic behavior, vision-language grounding, and embodied interaction.

🔗 𝐁𝐞𝐲𝐨𝐧𝐝 𝐓𝐨𝐤𝐞𝐧‑𝐏𝐫𝐞𝐝𝐢𝐜𝐭𝐢𝐨𝐧: Critiques token-level models like GPT-4.5 and Claude 3.5, advocating for test-time adaptation, dynamic planning, and training-free grounding through retrieval-augmented agentic systems (a toy sketch of that loop follows below).

🚀 𝐑𝐨𝐚𝐝𝐦𝐚𝐩 𝐚𝐧𝐝 𝐂𝐨𝐧𝐭𝐫𝐢𝐛𝐮𝐭𝐢𝐨𝐧𝐬: Proposes a roadmap for AGI through neuro-symbolic learning, value alignment, multimodal cognition, and cognitive scaffolding for transparent, socially integrated systems.
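To make the "retrieve, plan, act, remember" pattern behind retrieval-augmented agentic systems concrete, here is a minimal, self-contained Python sketch. It is only an illustration of the general idea, not code from the paper: the class names (`MemoryStore`, `Agent`), the keyword-overlap retrieval, and the string-based "planner" are all hypothetical stand-ins for real components such as a vector store, a learned planner, or an LLM.

```python
# Toy sketch of an agent loop with persistent memory and retrieval-augmented
# grounding. Illustrative only; all names here are hypothetical stand-ins.

from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Persistent episodic memory: stores past observations as plain text."""
    episodes: list[str] = field(default_factory=list)

    def write(self, text: str) -> None:
        self.episodes.append(text)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Toy retrieval: rank stored episodes by keyword overlap with the
        # query. A real system would use embeddings and a vector index.
        q = set(query.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]


@dataclass
class Agent:
    """Minimal agentic loop: retrieve -> plan -> act -> remember."""
    memory: MemoryStore

    def step(self, observation: str) -> str:
        context = self.memory.retrieve(observation)      # ground in past episodes
        plan = f"goal inferred from {observation!r}"     # stand-in for a planner
        action = f"act({plan}) using context {context}"  # stand-in for execution
        self.memory.write(observation)                   # persist the new episode
        return action


if __name__ == "__main__":
    agent = Agent(memory=MemoryStore())
    agent.memory.write("user prefers concise summaries")
    agent.memory.write("last task: summarize cognitive architectures survey")
    print(agent.step("summarize the new AGI framework paper"))
```

The point of the loop is that grounding comes from memory at inference time rather than from retraining: each step reads context back out of the store before acting, which is one (simplified) reading of "training-free grounding."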
Love this move away from predicting the next word—finally, someone framing AGI in terms of agency and memory rather than just bigger datasets. Curious how your framework handles the messy ambiguity of real-world values when aligning ethics at scale—are we talking human-in-the-loop, or something bolder?
Loved this paper, Vinija Jain
Without understanding consciousness, enlightenment, and the hard problem of consciousness, we will never get AGI.
Bridging cognitive science with modular, verifier-driven architectures feels like a realistic path past token-prediction ceilings. Excited to see how the proposed neuro-symbolic layers and built-in alignment mechanisms evolve.
This is an exciting and necessary shift - moving from token prediction to true cognitive scaffolding is exactly what AGI demands. The integration of modular reasoning, memory, and agentic behavior acknowledges what many token-based systems currently miss: that intelligence isn’t just prediction - it’s persistence, context, and goal-oriented coherence.

From a psychology lens, this also opens a deeper conversation: Are we finally designing machines with a theory of mind, or at least the scaffolding to simulate one? And if so, how do we ensure these “minds” remain aligned, interpretable, and ethically grounded?

This kind of interdisciplinary, cognition-aware AGI roadmap is essential - especially if we want to build systems that don’t just perform tasks, but understand the world they’re embedded in. Looking forward to digging into the paper - congratulations to all involved.
Awesome Vinija Jain
Love the formatting and the text blocks! Amazing work Vinija Jain!
This is an exciting step toward rethinking AGI beyond next-token prediction! The emphasis on modular reasoning and persistent memory resonates strongly with how humans actually learn and adapt. Curious to hear your thoughts on how this framework might influence current agentic architectures — especially in real-world tasks requiring long-term context and ethical alignment.
Vinija Jain very interesting! For me, AGI is above all consciousness. I do believe Nobel laureate Roger Penrose was correct to point to Gödel’s theorem as proof that consciousness is not an algorithmic process. That means that all the models coming from algorithmic approaches (bottom-up or top-down) will never achieve consciousness, and thus no AGI. Furthermore, Penrose, together with Hameroff, suggested that the microtubules inside neurons can sustain quantum coherence (because of their symmetrical structure), and since in physics the collapse of the wavefunction is non-computational, it is possible that consciousness IS the collapse of the wavefunction, happening because of gravity (as Penrose proposed) inside these structures.