Forecasting is one important topic, but if you talk with your customers, for example in the automation business, they want classification too. So we compared time series foundation models on classification tasks. And yes, TiRex 🦖 performs very well. Great work by Andreas Auer, Daniel Klotz, Sepp Hochreiter and Sebastian Böck. The paper: https://lnkd.in/eK3bcJDj
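One common way to repurpose a forecasting foundation model for classification is to freeze it, extract embeddings for each series, and fit a lightweight head on top. A minimal sketch of that pattern: the `embed` function below is a hypothetical stand-in (simple summary statistics), not TiRex's actual API, and the nearest-centroid head is just one illustrative choice.

```python
import numpy as np

# Hypothetical stand-in for a pretrained time series foundation model's
# encoder (e.g. TiRex). Here we use simple summary statistics so the
# sketch runs without any model weights.
def embed(series: np.ndarray) -> np.ndarray:
    return np.array([series.mean(), series.std(), np.diff(series).mean()])

def fit_centroids(X, y):
    """Nearest-centroid classifier on top of the frozen embeddings."""
    return {label: np.mean([embed(x) for x, t in zip(X, y) if t == label], axis=0)
            for label in set(y)}

def predict(centroids, series):
    e = embed(series)
    return min(centroids, key=lambda label: np.linalg.norm(e - centroids[label]))

# Toy data: class 0 = flat noise, class 1 = upward trend.
rng = np.random.default_rng(0)
X = [rng.normal(0, 0.1, 50) for _ in range(10)] + \
    [np.linspace(0, 5, 50) + rng.normal(0, 0.1, 50) for _ in range(10)]
y = [0] * 10 + [1] * 10

centroids = fit_centroids(X, y)
print(predict(centroids, np.linspace(0, 5, 50)))  # a trending series -> class 1
```

In practice the head (linear probe, k-NN, centroids) matters far less than the quality of the frozen embeddings, which is what a comparison of foundation models measures.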
NXAI
Technology, Information and Internet
Linz, Upper Austria · 6,422 followers
Accelerating Industrial AI
Info
AI made in Europe. The European large language model: xLSTM. NXAI conducts cutting-edge research in the heart of Europe with the goal of developing competitive AI technologies.
- Website
- https://www.nx-ai.com/
- Industry
- Technology, Information and Internet
- Company size
- 11–50 employees
- Headquarters
- Linz, Upper Austria
- Type
- Corporation (AG, GmbH, UG etc.)
- Founded
- 2023
- Specialties
- Large Language Model, xLSTM, AI, Technology, Research, Artificial Intelligence, Industry, and Simulation
Locations
- Primary: Peter-Behrens Platz 2, Linz, Upper Austria 4020, AT
Employees of NXAI
- Lukas Hauzenberger: Machine Learning | Data Science
- Franka Hadadi (Godina): Operations Leader & Chief of Staff | Building Operational Excellence for Scale-ups | Ex-Techstars Portfolio Manager | Vienna | Tel Aviv
- Dániel Béres: Master's student of Artificial Intelligence at JKU
- Sepp Hochreiter: Professor at Johannes Kepler Universität Linz
Updates
The week starts with the number of the week: over 1.3 million downloads from Hugging Face. Amazing 🤩 Thanks a lot. And today we're adding something else: TiRex as a Docker image 🐳 Link to our GitHub: https://lnkd.in/euvbPqhX
Join us for KI Punsch & Bowle, a relaxed and friendly AI meetup at the NXAI Office, bringing together curious minds, tech enthusiasts, and #AI professionals. Every three months, we explore one exciting AI topic with a guest speaker, followed by an open discussion, drinks, and plenty of networking. Everybody is welcome :) We start in November with Dr. Jan Koutnik. He “Czeched out” in 2009 to improve recurrent neural networks and scale evolutionary reinforcement learning as a postdoc at IDSIA. At NNAISENSE, he directed the division of the company responsible for the “learning to park” NeurIPS 2016 demo with Audi, the “learning to run” NeurIPS competition win in 2017, and the Festo Softhand in-hand manipulation demo at Hannover Messe in 2019. Now he has his own company focusing on Blue Collar AI. He will explain what this means.
AI in the Forest 🌲 - we spent two days with over 30 participants at SICK and discussed the latest and greatest when it comes to AI. Thanks to Siemens, Beckhoff, KSB, Arburg, WITRON, Trumpf, MathWorks, Murr, VDMA and many more. Super happy to have discussed GenAI systems and continual learning with Jakub Tomczak. Great talk by Marco Huber on industrial AI use cases and the main question of how to generalize models across different applications. And we held a workshop on time series foundation models 🦖 thanks for all the feedback. TiRex 2.0 is already in planning 😁 Big shoutout to Andreas Schachl, who opened the first day with his speech about responsible annotation. Great to see how to integrate people with autism into the AI market. Special thanks to Tim Fischer (SICK) for organizing the event.
It's quite simple: Kajetan is right. Great work by Sebastian Lehner, Maximilian Beck and Kajetan Schweighofer; you invested a lot in the research paper on scaling laws. 🥇 3.2×10²³ FLOPs later, the results clearly favour xLSTMs.
Main findings:
For training 🏋♂️
- At fixed FLOP budgets, #xLSTMs perform better
- At fixed validation loss, xLSTMs need fewer FLOPs
For inference ⚡
- xLSTMs are faster than Transformers across all inference benchmarks
- xLSTMs are more energy-efficient
- xLSTMs are more cost-effective at the same performance, saving money and energy
Paper: https://lnkd.in/eG2m3wmC
Code: https://lnkd.in/e4denP5b
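The two training findings are two views of the same power-law picture: a curve with a steeper compute exponent gives lower loss at a fixed budget and, equivalently, needs less compute to hit a fixed loss. A toy illustration with invented coefficients (these are NOT the paper's fitted values):

```python
# Toy power-law scaling: loss(C) = a * C**(-b).
# Coefficients below are made up for illustration only.
def loss(C, a, b):
    return a * C ** (-b)

xlstm = dict(a=10.0, b=0.060)        # hypothetical steeper exponent
transformer = dict(a=10.0, b=0.055)  # hypothetical shallower exponent

C = 3.2e23  # the FLOP budget mentioned in the post
print(loss(C, **xlstm) < loss(C, **transformer))  # lower loss at fixed compute

# Equivalently: compute needed to reach a fixed validation loss L,
# obtained by inverting loss(C) = L  ->  C = (a / L)**(1/b).
def compute_for(L, a, b):
    return (a / L) ** (1 / b)

L_target = 0.5
print(compute_for(L_target, **xlstm) < compute_for(L_target, **transformer))
```

With any coefficients of this shape, the model family with the better exponent wins on both readings of the curve; the paper's contribution is fitting the actual exponents from large-scale runs.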
A new kid is on the block: Amazon researchers published a new leaderboard called fev-bench: A Realistic Benchmark for Time Series Forecasting 🕛 and TiRex 🦖 is in the lead. "We introduce fev-bench, a forecast evaluation benchmark containing 100 tasks spanning 7 real-world application domains. Our benchmark addresses a key gap in existing work by including 46 tasks with covariates alongside both univariate and multivariate forecasting scenarios, better reflecting real-world forecasting use cases." Check the leaderboard: https://lnkd.in/eH5MXPce Link: https://lnkd.in/esBWAkjP
Welcome, Werner Paulin! The NXAI family keeps growing! We’re thrilled to welcome Werner Paulin as our Director Business Development & Productisation. Werner will bridge the gap between our cutting-edge AI technology and the customers who need it most. Bringing 25 years of experience in industrial automation, Werner most recently led the introduction of the open automation platform NUPANO at Lenze. This platform enables machine builders to deliver digital services without requiring deep IT expertise – by combining operational technology and IT with lifecycle management and open standards. Now, Werner will bring his customer-centric approach to NXAI’s xLSTM-based AI solutions, ensuring they reach customers ready to transform their operations with intelligent, real-time, and scalable AI. Welcome to NXAI, Werner – excited to have you with us as we scale the next wave of industrial transformation! 🚀
Meta FAIR research on #xLSTM: short window attention enables long-term memorization. "...In this work, we introduce SWAX, a hybrid architecture consisting of sliding-window attention and xLSTM linear RNN layers." Result: "Through an empirical analysis of hybrid architectures, we evidence the counter-intuitive fact that shorter sliding windows lead to better length-extrapolation on retrieval tasks. Moreover, we introduce a training procedure that stochastically changes the window size throughout training. This training procedure offers a strong performance on short-context tasks (enabled by longer sliding windows) and the length-extrapolation ability of linear attention layers, enabled by shorter windows at training time." Link to the paper: https://lnkd.in/eiR2r_z5 Congrats to Loïc Cabannes, Maximilian Beck, Gergely Szilvasy, Matthijs Douze, Maria Lomeli, Jade Copet, Pierre-Emmanuel Mazaré, Gabriel Synnaeve and Hervé Jegou.
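The "sliding window" in a SWAX-style layer just means each token can only attend to its last few predecessors, leaving long-range memorization to the recurrent xLSTM layers. A minimal sketch of the attention mask alone (the paper's stochastic window-size schedule and the xLSTM layers themselves are not reproduced here):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal sliding-window attention mask.

    Position i may attend to positions j with i - window < j <= i,
    i.e. itself and at most (window - 1) earlier tokens. True = attend.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

m = sliding_window_mask(6, 3)
print(m.astype(int))  # a banded lower-triangular 0/1 matrix
```

Shrinking `window` narrows the band, forcing the attention layers to handle only local patterns; that is the knob the paper's finding is about.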