I tried using Meta AI to privately process a short Yoruba message before texting my mom. Instead of refining the text, it simply repeated the same sentence three times.

This experience revealed something much larger about the current state of artificial intelligence. Despite the breathtaking progress in large-scale models, AI systems still struggle to understand local context, nuance, and culture. A billion-parameter model trained mostly on English text can simulate intelligence, but it often fails to comprehend meaning where local identity, dialect, or social norms matter most.

That experience reaffirmed a conviction I’ve held for some time: the next leap in AI will come not from building ever-larger models, but from building localized small language models (SLMs), AI systems designed to understand the languages, traditions, and lived realities of the communities they serve.

If we want AI that truly benefits people, we must first invest in AI infrastructure for local intelligence. Once that foundation is strong, we can build federated AI infrastructure that connects those localized models, sharing insights, not raw data, across borders and industries in compliance with local and international laws.

The countries and institutions that get this right will lead the next wave of AI innovation. The future of AI will not just be intelligent; it will be locally fluent, culturally aware, and globally connected.

#LocalAIGov #SLMs #FederatedLearning #LanguageTech
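For readers curious what "sharing insights, not raw data" means mechanically, the standard technique is federated averaging: each community trains a model on its own data locally and shares only model weights, never the underlying text. This is a minimal illustrative sketch, not any particular platform's API; the three-community example and its sample counts are hypothetical.

```python
import numpy as np

def federated_average(local_weights, sample_counts):
    """Weighted average of per-site model weights (FedAvg-style).

    Each site trains locally and contributes only a weight vector;
    the raw training text never leaves the community.
    """
    stacked = np.stack(local_weights)                    # (n_sites, n_params)
    coeffs = np.array(sample_counts) / sum(sample_counts)  # weight by data size
    return coeffs @ stacked                              # global model weights

# Hypothetical example: three localized community models
w_global = federated_average(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    sample_counts=[100, 100, 200],
)
print(w_global)  # [3.5 4.5]
```

Real deployments add secure aggregation and differential privacy on top of this averaging step, so that even the shared weight updates reveal as little as possible about any one community's data.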
I've been using AI to generate images for presentation slides. It's interesting how the prompt "professional" always seems to default to white men 😒
Not sure the Tower of Babel is the best solution. In the universe we live in there is an objective reality that is understandable, or at least one whose principles and laws we try to understand through science. The truest grounding is when our models reflect objective reality. Even for human-defined concepts not grounded in material or energy reality, there is common knowledge evolved over thousands of years: agreements, accounting principles, and many other standards. If meaning is transformed too much by social or cultural context, there is a danger of creating silos of meaning with barriers to understanding. The goal is to be consistent at the agreed intersections of common meaning among ontologies. This does not prevent subtle differences in the scope and meaning of an ontology class, but it should be clear that its scope and level of detail is defined in each ontology it appears in, while some common intersection of meaning holds across ontologies. This creates a common meaning with specializations that narrow or broaden its description along the dimensions relevant to each ontology's purpose. There may still be special cases where the same word carries different meanings; controlled vocabularies help here.
Michael Akinwumi I agree with what you have said. I have created white papers on the development and deployment of sovereign AI and cloud infrastructure for tribal nations and underserved communities; let’s connect.
Glad you mentioned this. I attended an event hosted by Stanford last week, and this topic came up: the lack of language diversity represented in models.
Chief AI Officer, Head of Responsible AI Lab & Rita Allen Alum. @datawumi on X. Advisor. Personal Opinion.
X link to the Yoruba text: https://x.com/datawumi/status/1981924311695868091?s=46