Most of the solutions here assume you control the recording environment, which works well for async demos.
The harder case is live screen shares. If you're walking a client through something in real time and your terminal prints an env variable, or someone opens a config file mid-call to help debug, you can't pause to swap credentials.
The browser is actually a useful interception point for that specific case. Element-level pattern matching (sk-proj-, AKIA, Bearer tokens, key=value in .env format) can blur matching text in real time before it renders on screen. No environment isolation needed, no pre-production setup. Useful specifically because the exposure is transient and unplanned.
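For concreteness, here is a rough sketch of that matching layer, assuming a small set of illustrative patterns (not exhaustive). In a real extension you would wire this to a MutationObserver and apply a CSS blur to the enclosing element before paint; the masking function below just shows the detection core in a testable form.

```typescript
// Illustrative secret patterns; a production list would be longer and tuned.
const SECRET_PATTERNS: RegExp[] = [
  /sk-proj-[A-Za-z0-9_-]{20,}/g,      // OpenAI project-style keys
  /AKIA[0-9A-Z]{16}/g,                // AWS access key IDs
  /Bearer [A-Za-z0-9._~+/=-]{10,}/g,  // HTTP bearer tokens
  /^[A-Z_][A-Z0-9_]*=.+$/gm,          // KEY=value lines in .env format
];

function containsSecret(text: string): boolean {
  // Reset lastIndex because the global flag makes .test() stateful.
  return SECRET_PATTERNS.some((re) => {
    re.lastIndex = 0;
    return re.test(text);
  });
}

// Replace each match with a fixed-width mask. A browser extension would
// instead blur the matching element so layout and context are preserved.
function maskSecrets(text: string): string {
  return SECRET_PATTERNS.reduce((out, re) => out.replace(re, "████████"), text);
}
```

The fixed-width mask is deliberate: echoing the secret's length back on screen leaks information about it.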
auv1107's fake data approach is right for planned async demos. cocodill's ephemeral credentials are right for API testing. Real-time browser-level detection only adds value for the live, uncontrolled session case, which is narrower but harder to solve with either of the other approaches.
Curious what the blurmate approach handles — recordings, live share, or both?
The concerning pattern is that the data-collecting ones actively hide what they're doing — the Similarweb-linked extensions apparently obfuscate with Base64 or AES-256 before sending.
Worth distinguishing these from extensions that are genuinely client-side. A basic test: check the extension's manifest for network permissions (host_permissions). If it only requests the active tab and has no background network access, it physically cannot phone home. The inspection takes 30 seconds in chrome://extensions.
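That check can be sketched in code as a heuristic. The Manifest shape below is a simplification of MV3, and it is a first pass only: an extension can also gain network reach later via optional permissions, so treat a "false" here as necessary-but-not-sufficient.

```typescript
// Simplified view of the MV3 manifest fields that matter for this check.
interface Manifest {
  permissions?: string[];       // e.g. ["activeTab"]
  host_permissions?: string[];  // e.g. ["https://*/*"]
}

// Heuristic: host_permissions is what grants an extension the right to
// fetch arbitrary origins from its background context. activeTab alone
// only exposes the current tab on user gesture, with no hosts to call.
function canPhoneHome(m: Manifest): boolean {
  return (m.host_permissions ?? []).length > 0;
}
```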
The more insidious problem is that users can't easily distinguish between "this extension processes data locally" and "this extension processes data locally and also sends it somewhere." Same UI, very different behavior.
How much overhead did that add to your development workflow? I'm curious if building and maintaining that parallel demo infrastructure became its own project, or if it stayed lightweight.
Also, did you use this for investor demos specifically, or more for development/QA?
It had almost no workflow overhead. Remember: a data generator has essentially no overhead aside from any rules-based constraints on the data (how realistic it should be, what patterns it needs to have) and the format in which that data is stored (invariably a database). It had no interactions with any part of the main application; the only thing it touched was the database.
This let it be “simple” in terms of how it generated content, and “complicated” only in terms of what content it needed to create and how that content interconnected. Patient profiles were simple to define, but they were completely different from, say, the medications those patients were prescribed or the appointments that had been scheduled, or the connections between appointments and prescriptions.
So yeah, generating data is simple; defining what data to generate, and in what patterns, was a lot more difficult. Sometimes things that should have been related could only be generated in isolation from each other because of how that part of the generation tooling was assembled.
This was almost 100% used by developers and QA. Outside demos had a special DB used by sales with much more consistent data, albeit much smaller. The generator was meant to create _large_ data sets, just not very _pretty_ data sets.
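To make that split concrete, here is a toy sketch: per-entity generators that are trivially simple, with the cross-references as the only real logic. All names and fields here are hypothetical, not the actual schema described above.

```typescript
interface Patient { id: number; name: string }
interface Prescription { id: number; patientId: number; drug: string }

// Generating an entity in isolation is the easy part.
function generatePatients(n: number): Patient[] {
  return Array.from({ length: n }, (_, i) => ({ id: i + 1, name: `Patient ${i + 1}` }));
}

// The interconnections are where the real design work lives: prescriptions
// must reference patients that actually exist.
function generatePrescriptions(patients: Patient[], perPatient: number): Prescription[] {
  return patients.flatMap((p, pi) =>
    Array.from({ length: perPatient }, (_, j) => ({
      id: pi * perPatient + j + 1,
      patientId: p.id,
      drug: ["atorvastatin", "metformin", "lisinopril"][j % 3],
    })),
  );
}
```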
I thought this too initially - "just make the fake data look professional."
Where it broke down for me: investors with technical backgrounds would ask edge case questions ("show me how this handles 10K records" or "what does error handling look like with real load?"). The fake environment couldn't simulate that complexity authentically.
The other issue was muscle memory. When I'm demoing something I use daily, I'm fast and fluent. In a fake environment, I'd hesitate or click wrong because it's not my real workflow. Investors noticed.
Presumably the issue here is that you have customers with >10k records, but can't show them. Why not take their data and anonymize it, then put it under a fake customer?
> "what does error handling look like with real load?"
I find it hard to believe that anyone is making an investment decision off of this question, but how would you demo this with a real customer anyway? Intentionally introduce a bug so that you can show them how errors are handled? Wouldn't the best course of action here be to just describe the error handling?
Thanks for the Mockaton suggestion! I like the API mocking approach - that handles the backend data cleanly.
The challenge I kept running into was the frontend side during live screen shares. Even with mocked APIs, I'd have credentials visible in browser tabs, notifications popping up with client names, or sidebar elements showing sensitive info.
Did you find Mockaton solved the full screen-share exposure problem, or did you combine it with other approaches?
2b. If it doesn't set a cookie (some SSO providers set it in `sessionStorage`), and assuming it’s a React app with an <AuthProvider>, you might need to refactor the entry component (<App/>) so you can bypass it, e.g.:
```jsx
SKIP_AUTH // env var
  ? <MyApp/>
  : <AuthProvider><MyApp/></AuthProvider>
```
Then, instead of using the 3rd-party hook directly (e.g., useAuth), create a custom hook that falls back to a mocked object when there's no AuthContext. Something like:
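A minimal sketch of that fallback, with the logic pulled into a plain helper so it can be tested without a renderer. The Auth shape, the mock, and all names here are assumptions about what the real auth object looks like.

```typescript
type Auth = { user: { name: string }; isAuthenticated: boolean };

// Mock returned when the app runs without a real provider (SKIP_AUTH path).
const MOCK_AUTH: Auth = { user: { name: "Demo User" }, isAuthenticated: true };

// The context value is undefined when <AuthProvider> was bypassed.
function resolveAuth(ctxValue: Auth | undefined): Auth {
  return ctxValue ?? MOCK_AUTH;
}

// In the app, the custom hook is then a thin wrapper, e.g.:
//   export const useAppAuth = () => resolveAuth(useContext(AuthContext));
```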
Interesting and smart approach - most noise generators are obviously artificial in their traffic patterns.
I've been thinking about browser privacy from a different angle: not hiding what you browse, but hiding what's visible on your screen when you share it. Screen sharing during video calls basically bypasses every privacy tool you have running (VPN, tracker blockers, etc.) because the other person sees your raw screen.
The layered privacy defense framing makes sense. This handles the ISP/tracking side. But what handles the "accidentally showed my email to my entire team during a screen share" side? Different threat model but equally common.
This is a smart workflow. I've been doing something similar (record screen, then manually write docs) and the AI approach saves hours.
One thing that still trips me up though - the prep before hitting record. I spend like 10 minutes closing personal tabs, clearing browser history, making sure nothing sensitive is visible on screen. By the time I'm "ready" to record, I've lost the spontaneous energy that makes demos feel natural.
Does your tool handle any of that? Like auto-detecting sensitive content before processing or helping sanitize the recording after? Or is it assumed you're recording in a clean environment?
Curious because the actual documentation generation is only half the workflow. The "setup tax" before recording is the part that kills my momentum.
Congrats on shipping this. The 15 minute turnaround is impressive.
Thank you! Yes, totally agree, but I think it is more important in the case of demo videos than for the recording -> documentation process, where the written output is the biggest value.
That said, I still agree there might be some sensitive parts that get into the screenshots in the documentation, so in the future we plan to add sensitivity detection and screenshot blurring options; for now, yes, it is better to prepare a clean environment. Thanks again for the thoughtful comment!
Nice execution. The toggle visibility approach is intuitive for live coding.
One use case you might not have considered: pairing this with a persistent "demo mode" profile in VS Code. I've been experimenting with a separate workspace that auto-loads this extension plus other privacy settings.
Have you thought about expanding beyond config files? Sometimes git commit messages in the terminal include sensitive issue IDs or customer names, database connection strings leak in SQL output, or file paths reveal internal org structure. The config file approach is solid but I'd pay for a "demo mode" extension that handles all those edge cases.
What's the performance impact on large config files (like 1000+ line .env files)? Does it parse in real-time or cache the redaction?
Really clean approach to the pre-call panic. The dual reality concept is smart.
I've been working on a similar problem from the browser side (since most of my sensitive stuff lives in tabs - email, Slack, Notion). Different angle but same pain point.
Curious about the multi-monitor scenario - if I'm sharing Screen 1 but a cloaked window spans both monitors, does it cloak the whole window or just what's visible on the shared screen?
Also wondering about the "forgot to cloak" problem. I know I'd 100% forget to toggle before a call at least once a week. Any plans for persistent rules like "always cloak windows matching X pattern"? The mental overhead of remembering seems like it could be a blocker.
Congrats on shipping - the pain point is very real. Have you seen any performance hit on the capture stream itself? Some screen share tools get weird when windows are programmatically hidden.
Thanks! The browser side approach is a great counterpart to this. Here is the technical breakdown:
Multi-monitor logic: Since I target the Window ID via the Windows API, the cloak follows the window anywhere. It remains invisible to the capture buffer whether it is on Screen 1, Screen 2, or spanning both.
The "forgot to toggle" problem: I actually already solved this. There is a feature in Settings now where you can favorite apps to be hidden by default. Once tagged, they cloak automatically the moment you launch them, so there is no manual toggle needed.
Performance: I built this natively in Rust, using a specific flag at the compositor level to exclude windows from the capture stream. It is much lighter than real-time blurring and avoids the black-box flickering issues common in other tools.
Appreciate the congrats. The pain point is definitely real!
The browser-vs-app debate aside, the bigger issue for me is what happens once you're in the call.
Screen sharing on Zoom (or any platform) is where the real anxiety kicks in. One wrong tab, one notification, one bookmarked URL you forgot about.
I've moved to keeping a completely separate browser profile for calls - nothing in it except the meeting app. Overkill? Maybe. But it eliminates the pre-share panic entirely.