Phu Ngo
𝗧𝘂𝗿𝗻 𝗮𝗻𝘆 𝗽𝗼𝗿𝘁𝗿𝗮𝗶𝘁 𝗶𝗻𝘁𝗼 𝗮 𝘁𝗮𝗹𝗸𝗶𝗻𝗴 𝘃𝗶𝗱𝗲𝗼 𝗶𝗻 𝗖𝗼𝗺𝗳𝘆𝗨𝗜! Workflow below ↓

Been testing 𝗠𝘂𝗹𝘁𝗶𝗧𝗮𝗹𝗸 from MeiGen AI - it creates accurate lip-sync videos from a static image and an audio input instead of basic face swapping or deepfakes.

𝗞𝗲𝘆 𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀:
→ Analyzes facial features and audio patterns
→ Generates natural expressions and lip movement
→ Supports multi-character conversations
→ Works with photos, artwork, and digital characters

𝗪𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗯𝗲𝘀𝘁:
- Clear front-facing portraits
- 512px+ input resolution
- Clean audio with distinct speech

Comment "𝗧𝗔𝗟𝗞" below, and I'll send you the complete workflow with all required models.
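The workflow itself isn't included in the post, so purely as illustration, here is a minimal Python sketch of the pre-flight checks the "what works best" list implies (512px+ portrait, usable speech audio). The run_multitalk() call is a hypothetical placeholder for the actual ComfyUI/MultiTalk pipeline, not its real API.

```python
# Hypothetical pre-flight check for a talking-portrait workflow.
# Mirrors the post's guidance: a clear front-facing portrait at
# 512px+ and clean speech audio. run_multitalk() is a placeholder,
# not MultiTalk's real API.

import wave
from pathlib import Path

from PIL import Image  # pip install Pillow

MIN_SIDE = 512  # the post recommends 512px+ input resolution


def check_inputs(portrait: Path, speech_wav: Path) -> None:
    with Image.open(portrait) as img:
        w, h = img.size
        if min(w, h) < MIN_SIDE:
            raise ValueError(f"portrait is {w}x{h}; upscale to at least {MIN_SIDE}px")

    with wave.open(str(speech_wav), "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
        if duration < 1.0:
            raise ValueError("audio clip is shorter than one second")


def run_multitalk(portrait: Path, speech_wav: Path) -> None:
    # Placeholder: in practice this is the ComfyUI workflow
    # (load image -> load audio -> MultiTalk nodes -> video output).
    check_inputs(portrait, speech_wav)
    print(f"Would animate {portrait.name} with {speech_wav.name}")


if __name__ == "__main__":
    run_multitalk(Path("portrait.png"), Path("speech.wav"))
```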
Vahe Aslanyan
When a misplaced character turns flawless code into a head-scratcher, it’s a stark reminder that brilliance isn’t just in flashy new tools—it’s in sound engineering. In today’s fast-paced tech arena, a new breed of code partner is emerging, one that reimagines AI assistance by weaving together time-tested software principles with cutting-edge language models.

Picture algorithms that transform prompts into skeletal abstract syntax trees, run brisk simulations to weed out errors, and refine outputs until your code compiles like clockwork. While lateral leaps in AI might steal the spotlight, they still depend on your seasoned eye to spot anomalies and sneaky pitfalls invisible to brute token guessing. It’s that blend of disciplined fundamentals and relentless innovation that transforms mere drafts into robust, production-ready code.

The journey isn’t about being first—it’s about building systems that mirror the precision of a well-tuned engine, where every component interacts seamlessly from static inference to dynamic test cycles. True mastery comes when you let AI handle the initial heavy lifting, then step in with your cross-disciplinary savvy to polish and integrate the work. When your automated assistant pairs with your engineering instincts, you’re not just generating code; you’re crafting resilient, scalable systems that endure beyond any meme-worthy mischief.

Ready to evolve beyond the hype and build lasting software? Follow @LunarTech for blueprints that turn innovative ideas into enduring engineering excellence.
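As a concrete illustration of the "parse, check, refine" loop the post gestures at, here is a minimal Python sketch using the standard ast module; generate() is a hypothetical stand-in for whatever language model produces the draft, and only the validation side (AST parse plus byte-compile) is concrete.

```python
# Minimal sketch of a "draft, statically check, refine" loop.
# generate() is a stand-in for any code-generating model; the
# validation step uses only the standard library.

import ast


def generate(prompt: str, feedback: str = "") -> str:
    # Placeholder for a call to a code-generating model.
    raise NotImplementedError


def validate(source: str) -> str | None:
    """Return an error message if the draft does not even parse and compile."""
    try:
        tree = ast.parse(source)              # skeletal abstract syntax tree
        compile(tree, "<generated>", "exec")  # catches a few more static errors
    except SyntaxError as exc:
        return f"line {exc.lineno}: {exc.msg}"
    return None


def draft_until_it_compiles(prompt: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        source = generate(prompt, feedback)
        error = validate(source)
        if error is None:
            return source  # hand off to a human for real review
        feedback = f"Previous draft failed to parse: {error}"
    raise RuntimeError("model never produced syntactically valid code")
```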
Lukasz Mirocha, PhD 🔜 SIGGRAPH Asia Hong Kong
[4DGS STREAMING] Can you stream a volumetric video feed (3DGS) to mobile devices in real time? The LiveGS project, which I saw at SIGGRAPH 2025 Emerging Technologies, proves it is possible.

LiveGS benchmarks:
✅ Over 30 FPS on iPhone 15
✅ Bitrates under 20 Mbps
✅ Latency under one second (see the video)

This free-viewpoint video (FVV) live-broadcasting system solves the bottleneck challenges in streaming 3D Gaussian Splatting content to mobile devices. For creators, educators, and professionals working with spatial media, research like this opens new possibilities for live volumetric broadcasts accessible on everyday devices in the not-so-distant future.

Check out the paper link in the comments.

Contributors: Yuzhong Chen, ByteDance Inc., Beijing, China; Yuqin Liang, ByteDance Inc., Shanghai, China; Zihao Wang, ByteDance Inc., Hangzhou, China; Danying Wang, ByteDance Inc., Shanghai, China; Cong Xie, ByteDance Inc., Hangzhou, China; Shaohui Jiao, ByteDance Inc., Beijing, China; Li Zhang, ByteDance Inc., San Diego, USA.
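For a sense of scale, the quoted figures alone imply a per-frame budget of well under 100 KiB; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on the quoted LiveGS numbers:
# under 20 Mbps at 30+ FPS leaves roughly this much per streamed frame.

bitrate_mbps = 20  # "bitrates under 20 Mbps"
fps = 30           # "over 30 FPS on iPhone 15"

bits_per_frame = bitrate_mbps * 1_000_000 / fps
kib_per_frame = bits_per_frame / 8 / 1024

print(f"~{bits_per_frame / 1e6:.2f} Mb (~{kib_per_frame:.0f} KiB) per frame")
# -> ~0.67 Mb (~81 KiB) per frame: each Gaussian Splatting update must be
#    compressed to well under 100 KiB to stay inside the reported budget.
```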
Mick Mahler
I just automated the entire process of creating hyperrealistic, consistent AI character datasets - completely free on your own hardware. The workflow generates a complete training dataset with high-resolution upscaled images in different settings and poses, all with detailed captions. Want different poses? Load a reference. Want different outfits? Use virtual try-on.

Here's why this matters: Tools like NanoBanana and Seedream 4.0 produce great results, but they're expensive, censored, and every image costs money. Your data lives in their cloud. You're dependent on their API staying online. And you have zero control over pricing changes or access.

This workflow changes that. Everything runs locally using free, open-source models. Zero ongoing costs. No censorship. Complete creative freedom. Your characters, your data, your hardware. Train once, use forever.

I spent months testing every approach to match commercial quality without the constraints. The breakthrough was Qwen Image Edit with automated dataset generation. From there I added LoRA training for Flux and Wan 2.2, upscaling, and HD video generation.
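The post doesn't include the workflow files, but as an illustration of the dataset-packaging step it describes, here is a minimal Python sketch assuming the common "image plus same-named .txt caption" layout that many LoRA trainers accept; the captioning function, folder names, and trigger word are hypothetical.

```python
# Minimal sketch of packaging generated character images into the
# "image + same-named .txt caption" layout accepted by many LoRA
# trainers. caption_image() is a placeholder for whatever captioning
# model the workflow uses; paths and trigger word are made up.

import shutil
from pathlib import Path

TRIGGER = "mycharacter"               # hypothetical trigger token for the LoRA
SRC = Path("generated_images")        # upscaled outputs from the workflow
DST = Path("dataset/10_mycharacter")  # "<repeats>_<name>" naming used by some trainers


def caption_image(image_path: Path) -> str:
    # Placeholder: swap in an automated captioning model here.
    return "a photo of a person, studio lighting, upper body shot"


def build_dataset() -> None:
    DST.mkdir(parents=True, exist_ok=True)
    for src in sorted(SRC.glob("*.png")):
        shutil.copy2(src, DST / src.name)                    # copy the image
        caption = f"{TRIGGER}, {caption_image(src)}"         # trigger + caption
        (DST / f"{src.stem}.txt").write_text(caption, encoding="utf-8")


if __name__ == "__main__":
    build_dataset()
```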