SigNoz

Software Development

San Francisco, CA 6,288 followers

Open-source observability platform

About us

Open-source observability platform

Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco, CA
Type
Privately Held
Specialties
Observability, Application Monitoring, Log Management, and DevOps


Updates

  • Our Cloud Teams plan now starts at $49/month, making world-class observability accessible to every engineering team building in the age of AI. At this new price, you get every feature previously included in the $199 plan: unified logs, metrics, traces, OpenTelemetry-native pipelines, and full collaboration features. No feature cuts, no hidden fees. Just simple, predictable pricing.

    With LLMs and AI-powered coding tools becoming the new normal (in the latest YC batch, up to half of startups primarily write code using AI), the need for robust, real-time observability has never been greater. Legacy tools remain out of reach for most early teams, but we’re committed to changing that. We’re proud to continue our mission to democratize observability for builders everywhere.

  • Instrumenting your Next.js app with OpenTelemetry doesn’t have to be complicated. We wrote a step-by-step guide to help you set up observability in your Next.js app using OpenTelemetry and export data to SigNoz.

    What’s covered:
    1/ Setting up OpenTelemetry in a Next.js project
    2/ Tracking API routes and client-side interactions
    3/ Exporting traces to SigNoz
    4/ Full working example in TypeScript

    If you're building with Next.js and want better observability, this guide is for you.
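As a minimal configuration sketch of the setup the guide walks through (assuming the `@vercel/otel` package; the service name and SigNoz ingest URL are illustrative placeholders, not values from the guide):

```typescript
// instrumentation.ts — Next.js loads this file automatically on server start.
// Requires the `@vercel/otel` package.
import { registerOTel, OTLPHttpJsonTraceExporter } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'my-nextjs-app', // illustrative name
    traceExporter: new OTLPHttpJsonTraceExporter({
      // Point at a SigNoz Cloud or self-hosted OTLP HTTP endpoint
      url: 'https://ingest.<region>.signoz.cloud:443/v1/traces',
    }),
  });
}
```

With this file in place, Next.js API routes and server-side rendering are traced without touching application code.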

  • Building with the Vercel AI SDK? You can now monitor your entire LLM pipeline using SigNoz. No extra infra. Just observability that works out of the box.

    What you get:
    1/ Prompt-level metrics
    2/ Input-output visibility with traces
    3/ Token usage and cost tracking
    4/ Compatible with experimental_tools in the SDK

    Built for developers who want better visibility into their AI apps without friction.
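As a usage sketch (assuming the `ai` and `@ai-sdk/openai` packages and a configured API key; the model id and prompt are illustrative), telemetry is enabled per call:

```typescript
// Enabling OpenTelemetry for a single call with the Vercel AI SDK.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'), // illustrative model id
  prompt: 'Summarize our release notes in one sentence.',
  // Spans for the call (prompt, completion, token usage) are emitted to
  // whatever OpenTelemetry setup is registered, e.g. one exporting to SigNoz.
  experimental_telemetry: { isEnabled: true },
});
```

The SDK emits spans through the process's registered OpenTelemetry provider, so no SigNoz-specific wiring is needed beyond an OTLP exporter.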

  • Elizabeth Mathew from our team built an MCP server for observability. Here's what she found about the limitations.

    Root cause analysis is hard to solve but easy to verify once you have an answer. MCP servers generate hypotheses, but verifying them often takes as much effort as manual investigation. Research shows LLMs reach 44-58% accuracy in incident response, while human SREs hit 80%+. The gap widens for unfamiliar issues.

    The math problem: if an LLM makes 8 tool calls with 3 possible interpretations each, that's 3^8 reasoning paths (over 6,500). Any wrong turn compounds through the chain.

    MIT research showed this clearly: an LLM trained on NYC maps gave perfect directions until roads changed. No real internal model, just pattern matching. The same thing happens in observability when systems change or new failure modes emerge.

    MCP is great for structured tasks like converting questions to PromQL. For complex debugging, it generates hypotheses that still need human verification.
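The branching-factor claim is easy to check: with a fixed number of plausible interpretations per tool call, the number of possible reasoning paths grows exponentially in the number of calls.

```typescript
// Reasoning paths explode exponentially in the number of tool calls:
// with 3 plausible interpretations per call, n calls yield 3^n paths.
const pathsFor = (toolCalls: number, interpretations = 3): number =>
  interpretations ** toolCalls;

console.log(pathsFor(8)); // 3^8 = 6561 possible reasoning paths
```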

  • We documented a complete Kubernetes observability setup using OpenTelemetry and two collector deployment strategies. The approach uses dual collectors for comprehensive coverage:
    - DaemonSet collector on each node: kubelet metrics, container logs, local OTLP endpoint
    - Deployment collector for cluster-level data: Kubernetes metrics, events, cluster state

    The DaemonSet handles per-node telemetry: CPU/memory usage per pod, container logs from /var/log/pods, and a local trace/metrics endpoint for applications. The Deployment collector focuses on cluster-wide insights: total pod counts, deployment states, container restart metrics, and Kubernetes events for troubleshooting.

    The setup uses the official OpenTelemetry Helm charts with presets like kubeletMetrics, logsCollection, and clusterMetrics for easy configuration. No manual receiver/processor setup needed. Tested on Minikube with the OTel astronomy shop demo, but it scales to any Kubernetes environment.
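Under the official `open-telemetry/opentelemetry-collector` Helm chart, the dual-collector layout above can be sketched with two values files (a sketch of the preset-based approach; exporters and endpoints still need to be filled in for your backend):

```yaml
# values-daemonset.yaml — per-node collector
mode: daemonset
presets:
  kubeletMetrics:
    enabled: true      # per-pod CPU/memory from the kubelet
  logsCollection:
    enabled: true      # container logs from /var/log/pods
---
# values-deployment.yaml — cluster-level collector (single replica)
mode: deployment
replicaCount: 1
presets:
  clusterMetrics:
    enabled: true      # pod counts, deployment states, restarts
  kubernetesEvents:
    enabled: true      # Kubernetes events for troubleshooting
```

Each file would be installed as its own Helm release, e.g. `helm install otel-node open-telemetry/opentelemetry-collector -f values-daemonset.yaml`.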

  • SigNoz reposted this

    Vishal Sharma

    Building SigNoz (YC W21) | Observability | Product

    We're Hiring a Founding Technical Support Engineer at SigNoz!

    Are you a DevOps engineer passionate about observability, OpenTelemetry, and helping other engineers succeed? Join our rapidly growing open-source SaaS startup! You'll debug complex distributed systems issues, build our support infrastructure from scratch, help customers optimize their observability setups, and grow into leadership, all in a fully remote environment!

    Perks:
    🏠 Fully Remote (India)
    💰 Competitive compensation + ESOPs
    🚀 Founding Team Member
    🦾 Open Source Product (22K+ stars)
    📈 Fast-Growing Team & Industry
    👥 Clear Path to Head of Support or Product Roles
    🏥 Health Insurance
    🌍 Semi-Annual Offsites

    Sounds interesting? Comment below or apply through the link in comments!

    #TechnicalSupport #DevOps #OpenTelemetry

  • We cut the share of blocks scanned per log query from 99.5% to under 1% by rethinking how we store data in ClickHouse.

    Problem: logs from different pods/services were randomly mixed in storage. A namespace query scanned 41,498 out of 41,676 blocks (99.5%).

    Solution: resource fingerprinting. We generate deterministic fingerprints (cluster + namespace + pod) and sort by them in the ClickHouse primary key ORDER BY clause.

    Result: the same namespace query now scans 222 of 26,135 blocks (0.85%). The primary key index eliminates 25,913 blocks. Works for Kubernetes, Docker, and AWS CloudWatch.
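The fingerprinting idea can be sketched in a few lines (an illustrative reimplementation, not SigNoz's actual code): hash the identifying resource attributes in a fixed order, so logs from the same pod always get the same value and land next to each other when the table is sorted by it.

```typescript
import { createHash } from 'node:crypto';

// Deterministic resource fingerprint: the same (cluster, namespace, pod)
// always yields the same value, so sorting by it clusters each pod's logs
// into contiguous blocks that a primary-key index can skip.
function resourceFingerprint(attrs: Record<string, string>): string {
  const keys = ['k8s.cluster.name', 'k8s.namespace.name', 'k8s.pod.name'];
  // Canonical order makes the hash independent of attribute insertion order.
  const canonical = keys.map((k) => `${k}=${attrs[k] ?? ''}`).join(';');
  return createHash('sha256').update(canonical).digest('hex').slice(0, 16);
}

const a = resourceFingerprint({
  'k8s.cluster.name': 'prod',
  'k8s.namespace.name': 'payments',
  'k8s.pod.name': 'api-7d9f',
});
const b = resourceFingerprint({
  'k8s.pod.name': 'api-7d9f',
  'k8s.namespace.name': 'payments',
  'k8s.cluster.name': 'prod',
});
console.log(a === b); // attribute order doesn't matter
```

A column holding this value would then lead the table's ORDER BY clause, which is what lets ClickHouse's primary key index skip blocks belonging to other namespaces and pods.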

  • Changelog v0.90.0 - JSON Flattening in Log Pipelines

    You can now flatten complex nested JSON logs into simple, queryable structures with configurable depth and custom field mapping. Think log parsing, but for your deeply nested JSON nightmares.

    The release also includes:
    1/ New datasource docs in onboarding (ELK migration, WordPress, Cloudflare, OpenAI)
    2/ Ingestion keys restored for self-hosted users
    3/ Light mode fixes for alert channels
    4/ Layout shift fixes in logs explorer
    5/ AWS ElastiCache dashboard corrections

    Full changelog and migration guide in the comments below.
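The flattening behavior can be approximated with a small helper (illustrative only; the real processor is configured in the SigNoz log pipeline UI, not written by hand): nested objects are flattened into dotted keys up to a maximum depth, and anything deeper is kept intact under its prefix.

```typescript
// Flatten a nested JSON log body into dotted keys, up to maxDepth levels.
// Objects deeper than maxDepth are kept as-is under their flattened prefix.
type Json = { [key: string]: unknown };

function flattenLog(obj: Json, maxDepth = 3, prefix = '', depth = 1): Json {
  const out: Json = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (
      value !== null &&
      typeof value === 'object' &&
      !Array.isArray(value) &&
      depth < maxDepth
    ) {
      // Still within the depth budget: recurse and merge the dotted keys.
      Object.assign(out, flattenLog(value as Json, maxDepth, path, depth + 1));
    } else {
      // Leaf value, array, or depth limit reached: emit under the dotted path.
      out[path] = value;
    }
  }
  return out;
}

console.log(flattenLog({ kubernetes: { labels: { app: 'api' } }, msg: 'ok' }));
// → { 'kubernetes.labels.app': 'api', msg: 'ok' }
```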

  • We've documented a complete implementation guide for adding observability to Next.js applications using OpenTelemetry.

    The series covers:
    1/ Server-side tracing for API routes and SSR performance
    2/ Monitoring 404 errors, external API calls, and exception handling
    3/ Client-side Web Vitals and component performance tracking
    4/ Structured logging with trace correlation
    5/ Production deployment, sampling, and scaling strategies

    Each tutorial includes working code examples and covers both Vercel and self-hosted deployments. The implementation uses OpenTelemetry for instrumentation and SigNoz for visualization. By the end, you'll have distributed tracing across your frontend and backend, structured logging, performance monitoring, and production-ready alerting configured.

