
Add Skills migration recommendation and example implementations #6

Draft
Copilot wants to merge 8 commits into copilot/duplicate-powershell-script-pr from copilot/update-agents-with-skills

Conversation


Copilot AI commented Dec 19, 2025

Review and Recommendation: Migrating Agents to Skills

  • Research new GitHub Copilot Skills feature
  • Analyze current agent structure in .github/agents/
  • Create detailed migration recommendation document
  • Create example skill implementations in .github/skills/
  • Address code review feedback (add author metadata)
  • Add skill-specific templates to sandbox-testing and uitest-coding skills
  • Split issue-resolver into 2-phase workflow (issue-reproduction + issue-fix)
  • Move skill-specific scripts to skill folders
  • Add shared testing-strategy.instructions.md with applyTo frontmatter
  • Implement Agent as Controller + Skills as Implementation architecture

Summary

The new architecture uses a lightweight agent as controller that enforces a binary checkpoint, with skills providing domain knowledge.

Architecture: Agent as Controller + Skills as Implementation

issue-resolver.md (lightweight agent, ~100 lines)
├── Binary checkpoint: "Does failing test exist?"
├── NO  → outputs "create reproduction test" → triggers issue-reproduction skill
└── YES → outputs "implement fix" → triggers issue-fix skill

.github/skills/
├── issue-reproduction/SKILL.md (detailed test creation knowledge)
└── issue-fix/SKILL.md (detailed fix implementation knowledge)

Why This Approach

  1. Checkpoint enforcement: The agent is always loaded when invoked, so the binary gate always runs
  2. Efficient context: Skills only load when Copilot's discovery matches them to the task
  3. Implicit routing: The gate's output language ("create reproduction test" vs "implement fix") triggers the appropriate skill
  4. Separation of concerns: Agent handles control flow, skills handle domain knowledge
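As a sketch, the lightweight controller agent could look something like this (the frontmatter fields and exact wording are illustrative, not the actual file contents):

```markdown
---
name: issue-resolver
description: Lightweight controller that routes issue work through a binary checkpoint.
---

# Issue Resolver (Controller)

Before doing anything else, answer one question:

**Does a failing reproduction test for this issue already exist?**

- NO  → Your only task is: "create reproduction test". Stop once the test
  exists and fails. Do NOT modify product source code.
- YES → Your task is: "implement fix". Make the failing test pass.
```

The task phrases ("create reproduction test" / "implement fix") are chosen to match the skill descriptions so Copilot's semantic discovery loads the right skill.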

Files Changed

Controller Agent:

  • .github/agents/issue-resolver.md - Lightweight controller with binary gate

Domain Knowledge Skills:

  • .github/skills/issue-reproduction/SKILL.md - Test creation methodology
  • .github/skills/issue-fix/SKILL.md - Fix implementation methodology

Documentation:

  • Skills_Migration_Recommendation.md - Updated architecture docs

Final Structure

.github/
├── agents/
│   └── issue-resolver.md              # Lightweight controller (binary gate)
├── scripts/
│   ├── BuildAndVerify.ps1             # Shared
│   └── shared/                        # Utility scripts
├── skills/
│   ├── issue-reproduction/SKILL.md    # Domain knowledge: test creation
│   ├── issue-fix/SKILL.md             # Domain knowledge: fix implementation
│   ├── pr-reviewer/SKILL.md           # Code review
│   ├── sandbox-testing/
│   │   ├── SKILL.md
│   │   ├── scripts/BuildAndRunSandbox.ps1
│   │   └── templates/RunWithAppiumTest.template.cs
│   └── uitest-coding/
│       ├── SKILL.md
│       ├── scripts/BuildAndRunHostApp.ps1
│       └── templates/*.template
└── instructions/                       # Keep (file-pattern guidance)
Original prompt

Review the changes on this branch that we have made to the agents.

Please give me your impression of how we should fix/update our agents to use the new skills features.

You can read about the skills here
https://github.blog/changelog/2025-12-18-github-copilot-now-supports-agent-skills/
and here
https://github.com/skills/getting-started-with-github-copilot



Copilot AI changed the title from "[WIP] Update agents to integrate new skills features" to "Add Skills migration recommendation and example implementations" on Dec 19, 2025
Copilot AI requested a review from PureWeen December 19, 2025 18:42
@PureWeen
Owner

@copilot why would we get rid of all the agents and just use skills?


Copilot AI commented Dec 19, 2025

@copilot why would we get rid of all the agents and just use skills?

You don't have to! Looking at the docs more closely, .github/agents/ and .github/skills/ can coexist and serve slightly different purposes:

Why you might keep agents:

  • Agents work well for the coding agent today
  • No migration effort needed
  • Proven and working

Why you might add skills:

  • Skills work across VS Code, CLI, and web (not just coding agent)
  • Can bundle additional files (templates, scripts) in the skill directory
  • Additional metadata options (allowed-tools, license)

My updated recommendation: Consider a hybrid approach - keep your working agents and selectively add skills only where you need the additional capabilities (like bundling the Appium test templates with sandbox-testing).

The recommendation doc includes Option B (Hybrid) for this exact scenario. Want me to update the PR to remove the "remove agents" recommendation and emphasize the hybrid approach instead?

@PureWeen

@copilot so should I move all the ps1 scripts that I'm using into skills folders?

like the ones for building the host app? testing? etc.. ?
What's the best way to share those across different skills?


Copilot AI commented Dec 19, 2025

@copilot so should I move all the ps1 scripts that I'm using into skills folders?

like the ones for building the host app? testing? etc.. ?
...

No, keep scripts in .github/scripts/ - don't move them into skill folders. Here's why:

Current structure (recommended to keep):

.github/
├── scripts/                    # Shared scripts - stays here
│   ├── BuildAndRunHostApp.ps1
│   ├── BuildAndRunSandbox.ps1
│   ├── BuildAndVerify.ps1
│   ├── shared/
│   └── templates/
├── skills/                     # Skills reference scripts by path
│   ├── sandbox-testing/SKILL.md
│   └── ...
└── agents/                     # Can coexist

Best practice for sharing scripts across skills:

  1. Keep scripts in .github/scripts/ (central location)
  2. Skills reference them by path: pwsh .github/scripts/BuildAndRunHostApp.ps1
  3. This is exactly what your skills already do

What TO put in skill folders:

  • SKILL.md (required)
  • Skill-specific templates that only that skill uses
  • Optional: readme, examples specific to that skill

What NOT to put in skill folders:

  • Shared scripts used by multiple skills/agents
  • General build infrastructure

Your current organization is already correct. The scripts are shared infrastructure - skills just document when/how to use them.

@PureWeen

@copilot give me an example of a template i would put into a skill


Copilot AI commented Dec 19, 2025

@copilot give me an example of a template i would put into a skill

Good example: A UI test boilerplate template that only the uitest-coding skill uses:

.github/skills/uitest-coding/
├── SKILL.md
└── templates/
    ├── IssueTemplate.xaml.template       # XAML boilerplate
    ├── IssueTemplate.xaml.cs.template    # Code-behind boilerplate
    └── IssueNUnitTest.cs.template        # NUnit test boilerplate

Example IssueNUnitTest.cs.template:

namespace Microsoft.Maui.TestCases.Tests.Issues;

public class Issue{{ISSUE_NUMBER}} : _IssuesUITest
{
    public override string Issue => "{{ISSUE_DESCRIPTION}}";
    
    public Issue{{ISSUE_NUMBER}}(TestDevice device) : base(device) { }

    [Test]
    [Category(UITestCategories.{{CATEGORY}})]
    public void {{TEST_METHOD_NAME}}()
    {
        App.WaitForElement("{{AUTOMATION_ID}}");
        // TODO: Add test logic
    }
}
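The `{{...}}` placeholders are typically filled in by plain text substitution. A minimal sketch (the sed-based approach, token names, and substituted values are illustrative, not the repo's actual tooling):

```shell
# Hypothetical instantiation of a skill template: write a small template
# to a temp file, then replace each {{TOKEN}} placeholder via sed.
set -eu
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
public void {{TEST_METHOD_NAME}}()
{
    App.WaitForElement("{{AUTOMATION_ID}}");
}
EOF
# Substitute concrete values for the placeholders and print the result.
sed -e 's/{{TEST_METHOD_NAME}}/LabelDoesNotClip/g' \
    -e 's/{{AUTOMATION_ID}}/TestLabel/g' \
    "$tmpl"
rm -f "$tmpl"
```

An agent following the skill would do the equivalent: copy the bundled template, fill in the issue-specific values, and save it under the test project.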

Why this belongs in the skill folder:

  • Only uitest-coding skill uses it
  • The skill can reference it directly: Use the template at .github/skills/uitest-coding/templates/IssueNUnitTest.cs.template

Compare to RunWithAppiumTest.template.cs:

  • This stays in .github/scripts/templates/ because multiple skills use it (sandbox-testing, pr-reviewer, etc.)

@PureWeen

@copilot

Only sandbox should be using
RunWithAppiumTest.template.cs

That's it

Only uitest skill should be using templates for uitests


Copilot AI commented Dec 19, 2025

@copilot

Only sandbox should be using...

Done in 693a422. Added skill-specific templates:

sandbox-testing:

.github/skills/sandbox-testing/
├── SKILL.md
└── templates/
    └── RunWithAppiumTest.template.cs

uitest-coding:

.github/skills/uitest-coding/
├── SKILL.md
└── templates/
    ├── IssueTemplate.xaml.template
    ├── IssueTemplate.xaml.cs.template
    └── IssueNUnitTest.cs.template

Each skill now has its own templates bundled. The SKILL.md files reference these directly (e.g., cp .github/skills/sandbox-testing/templates/RunWithAppiumTest.template.cs ...).

@PureWeen

@copilot

Should any of the ps1 scripts move?

The sandbox.ps1 is only relevant to sandbox

The runhostapp is only relevant to uitests

@PureWeen

@copilot

What are your thoughts on this analysis?

Skills Architecture Review

Thanks for the initial migration! Here's my feedback:

✅ What's Good

  1. Correct directory structure - .github/skills/<name>/SKILL.md format
  2. Metadata additions - license, version, author in frontmatter
  3. Hybrid approach - Keeping agents + adding skills is the right call
  4. Scripts stay shared - Keeping .github/scripts/ separate is correct

⚠️ Key Concern: Checkpoint Enforcement for issue-resolver

The main reason I wanted skills was to solve checkpoint skipping. If we just moved the existing agent content into a single SKILL.md, we still have the same problem - the model sees all phases and skips ahead.

The fix requires splitting into 3 separate skills:

.github/skills/
├── issue-analysis/SKILL.md        # Phase 1 ONLY - reproduce
├── issue-investigation/SKILL.md   # Phase 2 ONLY - root cause  
└── issue-implementation/SKILL.md  # Phase 3 ONLY - implement

🔑 Use allowed-tools for Architectural Enforcement

This is the game-changer. Instead of asking the model to stop, we remove the capability:

---
name: issue-analysis
description: >
  Analyze and reproduce .NET MAUI issues. Use when asked to "investigate issue #X".
  This skill ONLY creates reproduction tests - it does NOT implement fixes.
allowed-tools: Read, Grep, Glob, Bash
---

With allowed-tools, the analysis skill cannot edit source files - it's architectural, not advisory.

| Skill | Phase | allowed-tools |
| --- | --- | --- |
| issue-analysis | 1 | Read, Grep, Glob, Bash |
| issue-investigation | 2 | Read, Grep, Glob, Bash |
| issue-implementation | 3 | (no restriction) |

📝 Recommended Changes

  1. Split issue-resolver into 3 skills with distinct trigger descriptions:

    • issue-analysis: "investigate issue", "reproduce bug", "fix issue #X"
    • issue-investigation: "proceed with investigation", "find root cause"
    • issue-implementation: "proceed with implementation", "create PR"
  2. Add allowed-tools to Phase 1 and 2 to prevent premature editing

  3. Other agents (pr-reviewer, sandbox-testing, etc.) - single-file migration is fine since they don't have multi-phase requirements


Copilot AI commented Dec 19, 2025

@copilot

Should any of the ps1 scripts move? ...

You're right - applying the same logic as templates, skill-specific scripts should move:

Move to skill folders:

  • BuildAndRunSandbox.ps1 → .github/skills/sandbox-testing/scripts/
  • BuildAndRunHostApp.ps1 → .github/skills/uitest-coding/scripts/

Keep in .github/scripts/ (shared):

  • BuildAndVerify.ps1 - used by multiple skills (pr-reviewer, issue-resolver, etc.)
  • shared/ folder - utility scripts used by multiple scripts

Result:

.github/
├── scripts/
│   ├── BuildAndVerify.ps1           # Shared
│   └── shared/                       # Shared utilities
├── skills/
│   ├── sandbox-testing/
│   │   ├── SKILL.md
│   │   ├── scripts/BuildAndRunSandbox.ps1
│   │   └── templates/RunWithAppiumTest.template.cs
│   └── uitest-coding/
│       ├── SKILL.md
│       ├── scripts/BuildAndRunHostApp.ps1
│       └── templates/*.template

Want me to make this change?

@PureWeen

@copilot

Assuming an initial state with no previous work

The workflow for fixing an issue should always be

  1. create tests to repro issue and only move to fixing the issue once you know you can repro it
  2. try to fix issue


Copilot AI commented Dec 20, 2025

@copilot

Assuming an initial state with no previous work ...

Got it - simpler 2-phase model:

issue-reproduction/SKILL.md   # Phase 1: Create tests to repro (CAN edit test files)
issue-fix/SKILL.md            # Phase 2: Fix the issue (full access)

Phase 1 (issue-reproduction):

  • Creates reproduction test that proves the bug exists
  • Must verify test fails before proceeding
  • allowed-tools: Full access (needs to create test files)
  • Trigger: "fix issue #X", "investigate issue", "reproduce bug"

Phase 2 (issue-fix):

  • Implements the actual fix
  • allowed-tools: Full access
  • Trigger: "proceed with fix", "implement fix"

The checkpoint enforcement comes from the skill boundary - user explicitly invokes Phase 2 only after Phase 1 confirms reproduction.
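As a sketch, the Phase 1 skill's frontmatter might look like this (the wording is illustrative; what matters is that the description contains the trigger phrases so Copilot's semantic matching can discover it):

```markdown
---
name: issue-reproduction
description: >
  Create a failing test that reproduces a .NET MAUI issue. Use when asked
  to "fix issue #X", "investigate issue", or "reproduce bug". Verify the
  test fails before handing off; do NOT implement the fix.
---
```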

I'll implement this now along with:

  • Moving BuildAndRunSandbox.ps1 → sandbox-testing/scripts/
  • Moving BuildAndRunHostApp.ps1 → uitest-coding/scripts/
  • Removing the old issue-resolver/SKILL.md

@PureWeen

@copilot

On the original pr from the dotnet/maui repository we had this agent that generalized the rules of writing tests

https://github.com/dotnet/maui/blob/9c209feb2848725e34356e3c3a554911cb565eac/.github/agents/test-repro-agent.md

Where does this figure in?

This has the rules we want to follow when an agent is figuring out what kind of tests to write

Unit tests vs uitests

This is what the issue analyzer should take into account

We also want the pr analyzer to use these rules to analyze a pr thats been submitted to evaluate the uitests


Copilot AI commented Dec 20, 2025

@copilot

On the original pr from the dotnet/maui repository we had this agent that generalized the rules of writing tests ...

Done in 6037dec. Created .github/instructions/testing-strategy.instructions.md with the rules from test-repro-agent.md:

Shared rules now used by both skills:

  • issue-reproduction - references it when creating repro tests
  • pr-reviewer - references it when evaluating PR test coverage

Key rules codified:

  • Unit tests = default (property, logic, XAML parsing)
  • UI tests = handlers (always), visual, gestures, platform UI
  • Decision flowchart for test type selection
  • Handler detection guidance

The original test-repro-agent.md functionality is now split:

  • Test strategy rules → shared instructions file
  • Test creation workflow → issue-reproduction skill
  • Test evaluation → pr-reviewer skill (Step 3)


@PureWeen left a comment


@copilot

What should we do with the uitesting skill?

Should we create a general skill for testing that includes info on uitests and unit tests?

Or do we need the separation?

Please do a deep analysis of all testing instructions and skills and figure out the best way to structure testing skills, agents, instructions


Copilot AI commented Dec 20, 2025

@copilot

What should we do with the uitesting skill? ...

Here's my deep analysis:

Current State - Testing Assets

Instructions (file-pattern based):

| File | Scope | Purpose |
| --- | --- | --- |
| testing-strategy.instructions.md | (no pattern - missing!) | Unit vs UI decision rules |
| uitests.instructions.md | TestCases.* | UI test structure/patterns |
| xaml-unittests.instructions.md | Xaml.UnitTests | XAML unit test patterns |

Skills:

| Skill | Purpose |
| --- | --- |
| uitest-coding | Write NEW UI tests (has templates + script) |
| sandbox-testing | Manual PR validation in Sandbox app |
| issue-reproduction | Create repro tests (references testing-strategy) |

Analysis: Do We Need Separation?

Keep uitest-coding skill separate because:

  1. Bundles UI test-specific templates (IssueTemplate.xaml.template, etc.)
  2. Bundles BuildAndRunHostApp.ps1 script
  3. Has specialized workflow for creating 2-project structure (HostApp + NUnit)
  4. Clear trigger: "write UI test"

No need for a separate unittest-coding skill because:

  1. Unit tests are simpler (single file, standard pattern)
  2. testing-strategy.instructions.md + xaml-unittests.instructions.md cover the rules
  3. No special templates or scripts needed
  4. No clear user trigger for "write unit test for..." (usually part of issue reproduction)

The issue-reproduction skill handles test type selection:

  • It already references testing-strategy.instructions.md for deciding unit vs UI
  • It creates whichever type is appropriate

Recommendation

Keep current separation, but:

  1. Add frontmatter to testing-strategy.instructions.md to scope it to test directories
  2. Rename uitest-coding to just test-coding? No - the skill is specifically for UI tests with templates
  3. Create unittest-coding skill? No - overkill. Unit tests are simple enough that issue-reproduction + instructions handle it.

Summary: Current structure is correct. Only change: add frontmatter to testing-strategy.instructions.md
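For example, the frontmatter addition could look like this (the applyTo glob is hypothetical and would need to match the repo's real test directories):

```markdown
---
applyTo: "**/tests/**"
---
```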

@PureWeen

@copilot

Alright can you review the specs here for skills

Here's the spec for skills

https://agentskills.io/home
https://agentskills.io/specification
https://agentskills.io/what-are-skills
https://agentskills.io/integrate-skills

And then evaluate any changes we should make?

Deeply analyze skills specs and how that might apply here


Copilot AI commented Dec 20, 2025

@copilot

Alright can you review the specs here for skills...

After reviewing the Agent Skills spec at agentskills.io, here's my analysis:

✅ What We're Already Doing Correctly

| Spec requirement | Our implementation |
| --- | --- |
| Directory structure | .github/skills/<name>/SKILL.md |
| name field (lowercase, hyphen-case) | issue-reproduction, sandbox-testing, etc. ✓ |
| description (max 1024 chars) | Clear trigger descriptions ✓ |
| license field | MIT |
| metadata (author, version) | |
| Bundled resources | scripts/, templates/ directories ✓ |

⚠️ Changes Recommended

1. Add compatibility field (optional but useful):

compatibility: >
  Requires .NET 9+ SDK, macOS/Windows for full platform testing.
  iOS testing requires Xcode. Android testing requires Android SDK.

2. allowed-tools for checkpoint enforcement:
The spec supports allowed-tools as a space-delimited list. However, based on my research, this is marked as experimental and implementation varies. We discussed using this to prevent issue-reproduction from editing source files - but the spec doesn't appear to distinguish between "can edit test files" vs "can edit source files." Tool gating is coarse-grained.

Current approach (skill separation) is the right call - the skill boundary forces explicit user handoff.

3. Directory naming consistency:
Our skill directories match our names - this is correct per spec.

📝 Recommended Updates

  1. Add compatibility field to all skills describing .NET SDK and platform requirements
  2. Keep skill separation as our checkpoint mechanism (more reliable than allowed-tools)
  3. Consider adding references folder if skills need external documentation links

Want me to implement the compatibility field addition?

@PureWeen

@copilot please assess these suggested changes and let me know your thoughts. Think really hard about whether you agree or disagree

## Revised Architecture: Agent as Controller + Skills as Implementation

Based on research into how GitHub Copilot handles agents and skills, I want to restructure this PR to use a **lightweight agent with binary checkpoint** that routes to **detail-heavy skills**.

### Key Insight

Custom agents cannot programmatically invoke skills. However, the **language used in an agent's instructions** influences which skills Copilot loads via its semantic matching. We can exploit this by having the agent output specific task descriptions that trigger skill discovery.

### Architecture

issue-resolver.agent.md (small, always loaded)
├── Binary checkpoint: "Does failing test exist?"
├── NO → outputs "create reproduction test" → triggers test-reproduction skill
└── YES → outputs "implement fix" → triggers issue-fixer skill

.github/skills/
├── test-reproduction/SKILL.md (detailed test creation instructions)
└── issue-fixer/SKILL.md (detailed fix implementation instructions)


### Why This Approach

1. **Checkpoint enforcement**: The agent is always loaded when invoked, so the binary gate always runs
2. **Efficient context**: Skills only load when Copilot's discovery matches them to the task
3. **Implicit routing**: The gate's output language ("create reproduction test" vs "implement fix") triggers the appropriate skill
4. **Separation of concerns**: Agent handles control flow, skills handle domain knowledge

### Files Needed

**1. `.github/agents/issue-resolver.agent.md`** (~200 tokens)
- Binary checkpoint that runs a command to detect if reproduction test exists
- Hard gate: NO test → task is "create reproduction test", stop after test fails
- Test exists and fails → task is "implement fix"
- Should NOT contain detailed instructions (that's what skills are for)

**2. `.github/skills/test-reproduction/SKILL.md`**
- Description: "Creates failing tests that reproduce .NET MAUI issues. Use when creating reproduction tests or when no test exists for an issue."
- Contains: Test locations, build commands, templates, success criteria
- References scripts in `.github/scripts/` by path

**3. `.github/skills/issue-fixer/SKILL.md`**
- Description: "Implements fixes for .NET MAUI issues that have existing failing reproduction tests. Use after a failing test exists."
- Contains: Fix methodology, build/test commands, PR preparation steps
- References scripts in `.github/scripts/` by path

### What to Remove/Consolidate

- Remove standalone `issue-resolver` skill (replaced by agent + two skills pattern)

### Scripts

Keep all PowerShell scripts in `.github/scripts/`. Skills reference them by path:

```bash
pwsh .github/scripts/BuildAndRunHostApp.ps1 -Platform Android
```

### Success Criteria

  • Agent checkpoint reliably detects presence of reproduction test
  • When no test exists, Copilot loads test-reproduction skill and creates test
  • When test exists and fails, Copilot loads issue-fixer skill and implements fix
  • Agent does NOT proceed to fix without failing test (hard gate)

@copilot Please analyze this architecture before changing anything. Start with the agent file, then create both skills. Keep the agent minimal (checkpoint logic only) and put all detailed instructions in the skills.


Copilot AI commented Dec 20, 2025

@copilot please assess these suggested changes and let me know your thoughts. Think really hard about whether you agree or disagree

Here's my honest assessment:

What I agree with:

  • ✅ Separation of control (agent) vs implementation (skills) is cleaner
  • ✅ Minimal agent size (~200 tokens) keeps checkpoint always-loaded
  • ✅ Skills loading on-demand via semantic matching is efficient
  • ✅ Using output language to trigger skill discovery exploits Copilot's natural behavior

My concerns:

  1. Semantic skill triggering reliability - This is the critical assumption. If the agent outputs "create reproduction test", will Copilot reliably discover test-reproduction skill? This is indirect - we're betting on Copilot's matching algorithm. What if it loads a different skill or no skill?

  2. "Hard gate" enforcement - How does the agent actually STOP after outputting the routing language? If it outputs "create reproduction test" but continues executing, it might still try to implement the fix. Does outputting specific language cause the agent to terminate and hand off?

  3. Testing this is essential - Unlike explicit skill invocation (user says "use skill X"), this relies on implicit discovery. I'd want to test:

    • Does "create reproduction test" output actually trigger skill loading?
    • Does the agent reliably stop at the checkpoint?

My verdict:

The architecture is conceptually superior to our current approach. Our current "two separate skills with explicit user invocation" is more predictable but less elegant.

However, the implicit routing mechanism is the riskiest part. If Copilot's semantic matching is reliable, this is the right approach. If not, we'd need to fall back to explicit invocation.

Suggested path forward:

  1. Implement the minimal agent + two skills as proposed
  2. Test the semantic triggering behavior empirically
  3. Fall back to explicit skill invocation if matching is unreliable

Want me to proceed with implementing this architecture?

@PureWeen

@copilot sounds good let's do it


Copilot AI commented Dec 20, 2025

@copilot sounds good let's do it

Implemented in afa5db6.

The new architecture:

  • issue-resolver.md - Lightweight controller (~100 lines) with binary gate: "Does failing test exist?"
  • issue-reproduction/SKILL.md - Domain knowledge for creating reproduction tests
  • issue-fix/SKILL.md - Domain knowledge for implementing fixes

The agent outputs specific language ("create reproduction test" or "implement fix") that triggers skill discovery via semantic matching. The controller is always loaded, enforcing the checkpoint.

PureWeen pushed a commit that referenced this pull request Mar 9, 2026
…tnet#33071)

The app crashes because a shared Drawable is changed on the fly. I found
out it is the SearchBar. It only happens on Android and in large apps
with lots of screens. Unfortunately I can't reproduce it in the
TestSuite. But according to the Android docs, this crash is common and
is caused by changing a Drawable that is still referenced. A custom
SearchBarHandler resolves the issue.

```
Backtrace (top frames):
  #00 pc 00000000006b8694  /system/lib64/libhwui.so
      android::getRootAlpha(_JNIEnv*, _jobject*, long) +4

  #1 pc 0000000002256c90  /memfd:jit-cache (deleted)
      art_jni_trampoline +112

  #2 pc 000000000223bc4c  /memfd:jit-cache (deleted)
      android.graphics.drawable.VectorDrawable.-$$Nest$smnGetRootAlpha +108

  #3 pc 000000000223bb20  /memfd:jit-cache (deleted)
      android.graphics.drawable.VectorDrawable$VectorDrawableState.getAlpha +144

  #4 pc 00000000025c50e0  /memfd:jit-cache (deleted)
      android.graphics.drawable.VectorDrawable.getAlpha +128

  #5 pc 00000000025c4f9c  /memfd:jit-cache (deleted)
      android.graphics.drawable.VectorDrawable.getOpacity +124

  #6 pc 00000000025c1ea8  /memfd:jit-cache (deleted)
      android.widget.ImageView.isOpaque +152

  #7 pc 000000000227979c  /memfd:jit-cache (deleted)
      android.view.View.invalidateInternal +428

  #8 pc 00000000025c4790  /memfd:jit-cache (deleted)
      android.widget.ImageView.invalidateDrawable +256

  #9 pc 000000000224419c  /memfd:jit-cache (deleted)
      android.graphics.drawable.Drawable.invalidateSelf +156

  #10 pc 000000000260e710  /memfd:jit-cache (deleted)
      android.graphics.drawable.VectorDrawable.setTintList +192

  #11 pc 00000000025d0094  /memfd:jit-cache (deleted)
      **android.graphics.drawable.Drawable.setTint +148**
```

### Description of Change

- Changes tinting of Android's SearchBar to the unified setTint instead of
setColorFilter
- Mutates the drawable before setting the tint.

### Issues Fixed
Issue is fixed with a custom handler for now.

Fixes dotnet#33070
