element/damage: Better support effects on framebuffer contents#1910
Conversation
Haven't looked at the code, but I've got a question from the description: does this help with the case where blur needs to track damage slightly further than directly below the blur element, since blur samples pixels from slightly outside? Or is this solved by having the blur element be slightly bigger itself?
Yes, you can just increase the size of the element.

My 2 cents on the matter: the implementation linked above doesn't do this. I noticed even macOS, which has blur effects all over the place, doesn't implement this correctly. E.g. moving a window right next (pixel-perfect) to a colored line doesn't make the color show up in the blur until the window overlaps with said line at least partially. This led me to lazily implement kawase-blur with clamped sampling. So while this can be solved with this implementation, I don't think it really is necessary. (And you need to use something like clamped sampling anyway.)
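The clamped-sampling trick can be illustrated with a tiny stand-in sketch (plain Rust mimicking GL-style clamp-to-edge addressing; nothing here is cosmic-comp's actual code): samples a blur kernel would take from outside the element fold back onto the border pixel, so content next to the element never leaks into the blur and damage outside the element can be ignored.

```rust
// Clamp-to-edge addressing: out-of-bounds reads are folded back onto
// the nearest border pixel instead of reading neighbouring content.
fn sample_clamped(tex: &[Vec<f32>], x: i32, y: i32) -> f32 {
    let h = tex.len() as i32;
    let w = tex[0].len() as i32;
    let cx = x.clamp(0, w - 1) as usize;
    let cy = y.clamp(0, h - 1) as usize;
    tex[cy][cx]
}

// A single 3x3 box-blur tap near the left border only ever re-reads
// column 0, never pixels left of the element.
fn blur_tap(tex: &[Vec<f32>], x: i32, y: i32) -> f32 {
    let mut sum = 0.0;
    for dy in -1..=1 {
        for dx in -1..=1 {
            sum += sample_clamped(tex, x + dx, y + dy);
        }
    }
    sum / 9.0
}
```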
Thanks. That's an interesting detail. Definitely makes it easier, yeah.
```rust
fn capture_framebuffer(
    &self,
    frame: &mut R::Frame<'_, '_>,
    dst: Rectangle<i32, Physical>,
) -> Result<(), R::Error> {
```
So since you only get a frame here and no renderer, that means you cannot create GlesTextures with their automatic cleanup, right? And have to manage the destruction manually somehow (relevant for pooling/reusing textures).
Yes, that is one of the things I am not so happy with here.
As you can see the cosmic-comp blur implementation currently uses a lot of raw GL code to side-step that issue, but yeah you don't get access to the destruction-channel this way.
Given this feature most likely needs a blit anyway, I thought we at least have `BlitFrame`, but that also needs a `Framebuffer`, and now you have to deal with storing a framebuffer and the texture in the element, making it self-referential...
But I think this is a bigger issue with the whole abstraction, not necessarily something that needs to be fixed in this PR.
As for possible solutions, I am not sure. We could implement more renderer traits for the `GlesFrame`, or possibly allow sub-`Frame`s and then treat the whole thing more like a guard and abandon the strict separation between `Renderer` and `Frame`? (related: #1750)
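The guard idea can be sketched roughly like this (all names invented, not Smithay's API): if the frame mutably borrowed the renderer and re-exposed it, an element could reach renderer-level operations such as texture creation from inside `capture_framebuffer`, without a separate renderer parameter.

```rust
// Hypothetical guard-style sketch, not Smithay's actual types.
struct Renderer {
    textures_alive: usize,
}

struct Frame<'a> {
    renderer: &'a mut Renderer,
}

impl Renderer {
    // The frame acts as a guard over the renderer for one render pass.
    fn render(&mut self) -> Frame<'_> {
        Frame { renderer: self }
    }

    fn create_texture(&mut self) -> usize {
        self.textures_alive += 1;
        self.textures_alive
    }
}

impl<'a> Frame<'a> {
    // Renderer-level operations stay reachable through the guard,
    // sidestepping the strict Renderer/Frame separation.
    fn renderer(&mut self) -> &mut Renderer {
        self.renderer
    }
}
```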
One change that tripped me up a little is that since `is_framebuffer_effect()` has a default impl, nothing tells you that you need to update your custom wrapper elements to pass through this new function. Something for the release notes, I guess.
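A minimal sketch of the pitfall (the wrapper type here is hypothetical; only the method name mirrors the PR): because the trait method has a default impl, a wrapper that forgets to forward it silently reports `false` for a wrapped effect element.

```rust
trait Element {
    fn is_framebuffer_effect(&self) -> bool {
        false // default impl: nothing forces wrappers to override it
    }
}

struct BlurLike;
impl Element for BlurLike {
    fn is_framebuffer_effect(&self) -> bool {
        true
    }
}

struct Plain;
impl Element for Plain {} // relies on the default

struct Wrapper<E>(E);
impl<E: Element> Element for Wrapper<E> {
    // The forwarding that custom wrapper elements must remember to add;
    // without it the damage tracker never sees the inner element as a
    // framebuffer effect.
    fn is_framebuffer_effect(&self) -> bool {
        self.0.is_framebuffer_effect()
    }
}
```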
In the cosmic-comp `BlurElement` impl, how does `capture_framebuffer()` handle the output transform? In winit, when the transform is 180, the `glBlitFramebuffer` call blits from the wrong coordinates for me.
Perhaps the current transform needs to be passed into `capture_framebuffer()` too?
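For illustration, here is why a 180-degree output transform moves the blit source rectangle (a self-contained sketch with invented types, not Smithay's `Transform` API): both axes mirror inside the framebuffer, so the same logical rectangle lands at different pixel coordinates.

```rust
#[derive(Debug, PartialEq)]
struct Rect {
    x: i32,
    y: i32,
    w: i32,
    h: i32,
}

// Remap a blit source rectangle under a 180-degree transform inside a
// framebuffer of size (fb_w, fb_h): both origin coordinates mirror.
fn transform_180(r: &Rect, fb_w: i32, fb_h: i32) -> Rect {
    Rect {
        x: fb_w - r.x - r.w,
        y: fb_h - r.y - r.h,
        w: r.w,
        h: r.h,
    }
}
```

Blitting from the untransformed rectangle (as if the transform were `Normal`) reads the wrong region, which matches the symptom described above.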
Found a potential issue with the framebuffer effect, though I'm not sure if it might be a mistake in my code. When it's covered by some other opaque window and I move the covering window away, it doesn't get redrawn: fb-effect-damage-issue.mp4
Pushed two commits that should fix the issue you are seeing here, please test. |
Currently it doesn't. We should probably pass the output transform. You could also receive the projection matrix of the
Force-pushed d30b6dc to 7f80cd8.
Codecov Report

❌ Patch coverage report:

```
@@            Coverage Diff             @@
##           master    #1910      +/-   ##
==========================================
- Coverage   18.77%   18.39%   -0.39%
==========================================
  Files         180      182       +2
  Lines       28743    28974     +231
==========================================
- Hits         5397     5329      -68
- Misses      23346    23645     +299
```
Example way to repro on the current latest commit: fb-effect-damage-issue-2.mp4
Could it be that the following causes the issue?
This overlaps with the framebuffer-effect element, so the whole intersection of the framebuffer-effect element and the output geometry gets added to the damage.
Everything below it should be re-drawn in the same cycle. The new logic also ignores opaque regions on top of the framebuffer-effect element now, if any part of it (and thus the whole thing) got damaged.
Yes, that is what happened before.
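The rule just described can be condensed into a small sketch (assumed, heavily simplified logic; the real tracker works on full damage lists and opaque regions): if any damage below a framebuffer-effect element intersects it, the whole element, clipped to the output, is damaged.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Rect {
    x: i32,
    y: i32,
    w: i32,
    h: i32,
}

fn intersects(a: Rect, b: Rect) -> bool {
    a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h
}

// If any damage below touches the effect element, damage the whole
// intersection of the element and the output geometry.
fn effect_damage(element: Rect, output: Rect, damage_below: &[Rect]) -> Option<Rect> {
    if damage_below.iter().any(|d| intersects(*d, element)) {
        let x = element.x.max(output.x);
        let y = element.y.max(output.y);
        let r = (element.x + element.w).min(output.x + output.w);
        let b = (element.y + element.h).min(output.y + output.h);
        (r > x && b > y).then(|| Rect { x, y, w: r - x, h: b - y })
    } else {
        None
    }
}
```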
I just thought of another fun edge case where a framebuffer effect may end up with wrong contents: drawing to a different damage tracker (e.g. offscreens or screencasts). The intended use is for `capture_framebuffer()` to write into some texture stored in the element. So imagine drawing to the main damage tracker, then drawing to a different damage tracker once (e.g. one window-screencast frame). This other damage tracker will call `capture_framebuffer()`, which will update the texture (stored on the element). Then we draw back to the main damage tracker, with something changed on top of the element. There were no changes below, as far as the main damage tracker is concerned, so `capture_framebuffer()` is not called. But `draw()` is called, and it ends up drawing texture contents from the window-screencast frame, rather than the intended ones from the main scene.
Force-pushed cb32785 to ed9212b.
Yeah, caching the texture in the element becomes problematic if you use the same element across different framebuffers/damage trackers. I guess this specifically becomes an issue with niri's block-on-screencast feature, right? I am not sure how the code could address this without more information. The issue isn't really the different damage tracker, but that it potentially captured different contents, due to rendering with different intents, no? Ideally it could remember the ids of the elements it captured somehow, so on the next render it would know that there are different contents below, instead of relying on the damage tracker to store that information, but that seems not easily achievable. Any ideas?
Since all issues in the damage-tracking algorithm seem to be addressed, I updated the PR again. I dropped the
Yeah. I think in niri I'll just make sure to use different copies of the element for different damage trackers, but it's still a bit error-prone, because you can't check that an issue is happening other than stumbling upon it.
I think the issue is specifically different damage trackers, because the logic and state on whether a recapture needs to happen lies inside the damage tracker, and if you use two different ones, they don't share this info with each other.
Not sure, tbh. One workaround could be to have some kind of unique damage-tracker id that the element could test (similarly to how elements can test the renderer context id), but I don't know how good an idea this is. And then again, the decision to recapture is made by the damage tracker and not by the element.
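The tracker-id workaround could look roughly like this (a hypothetical sketch derived from the discussion; the type and method names are invented): the element keys its cached capture by a per-damage-tracker id, so a capture made while rendering for one tracker never clobbers the texture used when drawing with another.

```rust
use std::collections::HashMap;

// Per-tracker capture cache an element could hold. The pixel Vec stands
// in for a real texture.
struct CachedCaptures {
    per_tracker: HashMap<u64, Vec<u8>>,
}

impl CachedCaptures {
    fn new() -> Self {
        Self { per_tracker: HashMap::new() }
    }

    // A capture made for tracker A is stored separately from tracker B's,
    // avoiding the stale-contents bug described above.
    fn store(&mut self, tracker_id: u64, pixels: Vec<u8>) {
        self.per_tracker.insert(tracker_id, pixels);
    }

    fn get(&self, tracker_id: u64) -> Option<&Vec<u8>> {
        self.per_tracker.get(&tracker_id)
    }
}
```

The leak concern raised below applies here too: entries for destroyed trackers would need explicit eviction.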
Not a bad idea. IMO, either the object holding the capture state (the damage tracker) needs to also cache the result (the texture), or the caching needs to be able to somehow reliably invalidate itself, or split the cache into two. And we definitely want to encourage caching the effect result. I guess the id could cause trouble if the damage tracker is destroyed later and the texture potentially starts to leak? Storing the texture in the damage tracker also has its problems: how is it passed to the draw call? I wonder if we have the same issue somewhere without framebuffer effects, but I guess with just damage this problem doesn't come up.
What about extending the whole rendering, including the damage tracker, with some external state context that the cached texture could be stored in? The context could be managed on a per-use-case basis, like output rendering, screencapture and so on. Dropping the context would clean up all cached textures.
While I don't dislike the idea, I am failing to imagine the exact API for this. It seems very intrusive, with everything from the `DamageTracker`'s damage and render methods. Do you have other uses for this in mind? Should we also refactor the imported wl_buffer textures to use that context? I guess that would make it easier to do the import asynchronously from rendering, but given the textures are renderer-specific, it is not like you can suddenly rely on textures being available just because you use the same context. Would the context thus be specific to a

I am struggling with potentially adding a somewhat overengineered feature, if there isn't a use case for it outside of optimizing blur.
Hmm, another point for consideration. Currently framebuffer-effect elements won't work properly if several clones of an element are present in the render list, because all copies will overwrite the same framebuffer texture stored in them (assuming it's shared via Arc/Rc among all of them, e.g. a `GlesTexture`). Where this comes up is e.g. niri's overview, which draws the background/bottom layer-shell multiple times (once per visible workspace).
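A minimal reproduction of that clone problem (illustrative types only, with a `Vec<u8>` standing in for a shared texture): two clones share one texture via `Rc<RefCell<_>>`, so the second clone's capture clobbers the first clone's contents before they are drawn.

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Clone)]
struct EffectElement {
    // Shared among all clones, like an Rc-backed GlesTexture would be.
    texture: Rc<RefCell<Vec<u8>>>,
}

impl EffectElement {
    fn capture(&self, contents: &[u8]) {
        *self.texture.borrow_mut() = contents.to_vec();
    }

    fn draw(&self) -> Vec<u8> {
        self.texture.borrow().clone()
    }
}
```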
Added a commit that makes it possible to use the
I noticed that
That `Kind` determines the scanout state (if the element should go on the cursor plane, etc.), so it's orthogonal to `is_framebuffer_effect`.
Ah, that makes sense. The name `Kind` misled me a bit at first.
cmeissl left a comment:
Just a few nits from my side
Force-pushed c1415d6 to 51dd856.
@cmeissl Should all be addressed. Would appreciate a quick sanity check on the new commit before merging. |
For the "Allow querying RenderElementStates with namespace" commit, is the idea for the compositor to keep track of and pick the "preferred" namespace for updating from render element states? As far as I can tell, there's no way to tell it "find element regardless of namespace". So multiple unnamespaced clones will find "some" (undefined) single clone, but with a namespace you always have to pass the correct namespace (or the correct absence of one), is that correct?
This is an attempt at changing `Element`/`RenderElement` alongside the damage algorithms to better support effects on the underlying framebuffer, more specifically blurring the background. But similar effects are imaginable, and this tries to be generic towards whatever future versions of `ext-background-effects-v1` might allow.

An implementation of a `BlurElement` that makes use of this code can be found here: https://github.com/pop-os/cosmic-comp/blob/frosted-glass_noble/src/backend/render/blur.rs#L206

To facilitate this, we need two things: a way to know when content below such an element changed, and a way to mark elements as framebuffer effects (damaging them as a whole).

Facilitating the latter is easy: `Element` gets a new `is_framebuffer_effect` method. We use that both for damaging the whole element and for figuring out if we need the more expensive logic in the first place.

For the former, I introduced a `damage_index` to our `OutputDamageTracker` per element. Since we iterate through the elements top-to-bottom, we can record the index of all damage collected so far when we hit a new element, to have a clean cut-off between damage above and below our element. We can then use that later to figure out if any damage below the object occurred, which prompts `capture` to be called.
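The `damage_index` idea from the description can be sketched in a few lines (a heavily simplified stand-in, not the actual `OutputDamageTracker` code): damage is collected top-to-bottom into one list; when a framebuffer-effect element is reached, the current length is recorded. Any damage pushed after that index came from elements below, so the effect must recapture.

```rust
struct DamageTracker {
    damage: Vec<(i32, i32, i32, i32)>, // simplified damage rects
    damage_index: Option<usize>,       // cut-off recorded at the effect element
}

impl DamageTracker {
    fn new() -> Self {
        Self { damage: Vec::new(), damage_index: None }
    }

    fn push_damage(&mut self, rect: (i32, i32, i32, i32)) {
        self.damage.push(rect);
    }

    // Called when top-to-bottom iteration reaches a framebuffer-effect
    // element: everything recorded so far is damage *above* it.
    fn mark_effect(&mut self) {
        self.damage_index = Some(self.damage.len());
    }

    // True if any damage was recorded below the effect element,
    // which prompts a recapture.
    fn needs_capture(&self) -> bool {
        match self.damage_index {
            Some(idx) => self.damage.len() > idx,
            None => false,
        }
    }
}
```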