
Chasing performance drifts

· 13 min read
Nicolas Dubien
Software Engineer

You've probably been there. You switched from one tool to another because things got slow. Remember Webpack, then Rspack or Vite? Jest, then Vitest? Bun? tsgo? Speed is why most of them exist. But performance isn't only a developer problem; our users feel it too.

The trouble with performance is that it drifts slowly. Nobody complains until the day they churn and move to a competitor. At Pigment, performance is in our DNA. We want the platform to stay fast, not just be fast on day one.

This article is about how we protect our platform from performance regressions and how we chase performance drifts at Pigment.

We'll walk through three real regressions we caught and fixed. As with many performance regressions, they stayed invisible for a long time until they weren't anymore. For each one, we'll cover both the fix and the trap we set to make sure it could never silently come back.

Impersonation Done Right: Tokens, Read-Only Guarantees, and Audit Trail

· 13 min read
Aymeric Bouzy
Software Engineer

If you build enterprise SaaS with any kind of permissions model, you've heard this feature request before: "Our admins need to see what a user sees." The motivation is always the same — a user reports they can't access something, or they're seeing the wrong data, and the admin has no good way to verify what's going on from the user's perspective.

The workarounds your customers resort to are painful. They ask their users for screenshots, or schedule screen shares, but these are high-friction interactions. They try to reproduce the issue with test accounts, but test accounts never have the same permissions, the same data, or the same organizational context as the real user.

So sooner or later, the feature request lands on your backlog: let admins log into the app as another user — see the same dashboards, the same data, the same permissions. Even before users complain about missing access.

But the moment you say "admin impersonation" in an enterprise context, alarm bells go off. Can the admin modify the user's data? Can they access private information? Is there an audit trail? How do you prevent this from becoming a backdoor?

These are fair concerns. At Pigment, we built a user-facing impersonation mode that lets admins review what users see safely. It's been in production for a while now, and our customers love it. Here's how we designed it.

Being an Engineering Program Manager at Pigment — Podcast (in French)

· 2 min read
Matthieu Allegre
Engineering Program Manager
Mélissa Djuimo
Engineering Program Manager

We’re delighted to announce that Mélissa Djuimo and Matthieu Allegre, two of our Engineering Program Managers (EPM), were invited to the Tech Lead Corner podcast to share their experience and discuss the specific EPM role at Pigment.

🎧 Tech Lead Corner | Mélissa Djuimo and Matthieu Allègre - Being EPM at Pigment.

How We Turbocharged Our Prediction Pipeline Using Dask

· 8 min read
Florian Courtial
Software Engineer

At Pigment, we provide a central platform for organizations to model and plan using their most valuable data. As part of that mission, enabling predictive insights is key. Our first implementation of predictions worked well for smaller datasets, but as more users began applying the feature to larger and more complex data models, we quickly hit the limits of a single-machine setup in terms of memory and compute.

To address this, we redesigned our infrastructure around a distributed Dask cluster, allowing us to scale horizontally and support much larger workloads. In this post, we’ll share the architecture of our new solution and key takeaways from adopting Dask in production.

Agentic AI for data analysis in Pigment

· 7 min read
Sergey Arsenyev
Data Scientist

Agentic AI, in which one or more AI agents pursue goals autonomously or collaboratively, represents a new frontier in generative AI. Unlike a traditional large language model (LLM) operating on its own, these systems combine specialized agents with distinct roles to handle complex, open-ended tasks. Here, we’ll explore how Pigment uses this approach for data analysis, empowering clients to gain deeper insights.