Jessie/video watermarking #259
Open: JessieeeNotLi wants to merge 72 commits into spcl:master from McLavish:jessie/video_watermarking.
Commits (72; the file changes below are from 6 of them):
550cc8c  added bert as a test inference benchmark (McLavish)
f9c3817  hotfix to enable gpu capabilities (McLavish)
0f93b66  added pre-commit hooks for linting and formatting (McLavish)
b965d7b  linting and formatting setting for whoever uses vscode + black + flak… (McLavish)
e9916db  reformatted local file so it passes linting/format (McLavish)
2b75311  Merge branch 'development' into feature/bert-inference (McLavish)
813af03  bert now uses gpu (McLavish)
3a96f04  changed data repo to be OUR forked data repo (McLavish)
d4d5d30  change data loading path to own forked repo (Russellpang)
1b7deb7  change data loading path to own forked repo (Russellpang)
668652c  update benchmark function (Russellpang)
aae1023  fix: replaced onnxruntime requirement from CPU to GPU. now it actuall… (McLavish)
25fd1d9  circleci mypy fix? (McLavish)
c478c91  Merge pull request #2 from McLavish/feature/bert-inference (McLavish)
d6c4227  benchmarks is now flake8/black compliant. pre-commit hooks also check… (McLavish)
27b14d6  add linalg benchmarks (Russellpang)
ace2335  add linalg benchmarks (Russellpang)
ad3023d  changed CI/CD to run linting on the benchmarks folder ONLY. disabled … (McLavish)
52f30c0  fix typo (Russellpang)
4efff4d  update code (Russellpang)
f8577e7  Create .gitkeep (JessieeeNotLi)
dfaa14a  watermarking GPU benchmark files (JessieeeNotLi)
7f4f6c9  added run benchmark script (JessieeeNotLi)
1653b7c  Add usage instructions to read.me (JessieeeNotLi)
f5e7ab7  watermarking_readme.md (JessieeeNotLi)
aa3483f  Update NVENC benchmark instructions in README (JessieeeNotLi)
adf54a5  migrated from CircleCI to Github Actions (McLavish)
67772e2  fixed workflow directory (McLavish)
8f02b66  pip dependencies take too long (McLavish)
ae61e4b  Merge pull request #8 from McLavish/hotfix/code-quality-on-benchmarks (McLavish)
e06985c  new benchmark data (McLavish)
037f6c3  Bring folder from other-branch
377d949  update code
8dd8a6e  modify code and requirements
fa7e76e  Create .gitkeep (JessieeeNotLi)
51713f4  Add Dockerfile for NVENC-enabled FFmpeg image (JessieeeNotLi)
e0cfbdc  Add run script for video watermarking benchmark (JessieeeNotLi)
de15075  unfinished new fuc (Russellpang)
f534a53  Add files via upload (JessieeeNotLi)
3006879  add new functions (Russellpang)
7057465  Update run.sh (JessieeeNotLi)
44c8bcb  Update run.sh (JessieeeNotLi)
e53cfde  added gpu benchmark, and test results on CPU:wq
d224ddc  add new functions (Russellpang)
921f321  added recommender benchmark (McLavish)
dd840d1  Merge branch 'development' into feature/russell (YuxuanLiu-kayla)
4fca4aa  changed data submodule to use ssh and not https (McLavish)
26dfcf4  add channel_flow, compute, fft, and resnet of jax_npbench (down-street)
fad77da  reset the config (down-street)
e995e6a  Merge pull request #13 from McLavish/jiahao/npbenchs (down-street)
7e0d13f  microbenchmark example
942f5a1  Remove SSH public key from eval command (Russellpang)
6bc1dd7  Remove local_deployment.json configuration (Russellpang)
460ea1f  Delete out_storage.json configuration file (Russellpang)
de41ab6  Remove SSH private key from eval command (Russellpang)
ded520f  remove garbage
5c85980  test (Russellpang)
c5782dd  test (Russellpang)
2b52ced  test (Russellpang)
e5cb20c  Merge branch 'development' into feature/russell (Russellpang)
6488d6d  remove unnecessay files (Russellpang)
55c4ac4  fuck you (Russellpang)
b97b7a5  Refactor argument parsing for cleaner syntax (Russellpang)
1998b6b  Change 'reps' to 'iters' in jacobi2d function (Russellpang)
2cbd768  Delete benchmarks/000.microbenchmarks/050.matmul directory (Russellpang)
074d4b7  Merge pull request #6 from McLavish/feature/russell (McLavish)
efced9c  Revert "changed data submodule to use ssh and not https" (McLavish)
bc48b5e  fix: missing config.json (McLavish)
e154ba0  Merge branch 'development' into feature/inference-recommender (McLavish)
d9ed506  Merge pull request #11 from McLavish/feature/inference-recommender (McLavish)
703a05d  adapted watermarking benchmark to sebs structure (JessieeeNotLi)
f5093a8  Merge branch 'development' into jessie/video_watermarking (JessieeeNotLi)
@@ -0,0 +1 @@
(a single blank line added to an otherwise empty new file)
@@ -0,0 +1,293 @@
#!/usr/bin/env python3
import argparse, datetime, json, os, re, shutil, subprocess, sys, tempfile, csv
from typing import List, Dict, Any, Optional, Tuple


# --- helpers ---------------------------------------------------------------

def which_ffmpeg() -> str:
    p = shutil.which("ffmpeg")
    if not p:
        sys.exit("ffmpeg not found on PATH. Use Docker image with NVENC or install FFmpeg with NVENC.")
    return p


def run(cmd: List[str]) -> subprocess.CompletedProcess:
    return subprocess.run(cmd, stdin=subprocess.DEVNULL, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)


def has_encoder(ffmpeg: str, enc: str) -> bool:
    out = run([ffmpeg, "-hide_banner", "-encoders"]).stdout
    return re.search(rf"\b{re.escape(enc)}\b", out) is not None


def has_filter(ffmpeg: str, name: str) -> bool:
    out = run([ffmpeg, "-hide_banner", "-filters"]).stdout
    return (f" {name} " in out)


def gpu_info() -> Dict[str, Any]:
    try:
        out = run(["nvidia-smi", "--query-gpu=name,memory.total,driver_version", "--format=csv,noheader,nounits"]).stdout.strip()
        name, mem, drv = [x.strip() for x in out.splitlines()[0].split(",")]
        return {"name": name, "memory_total_mb": int(mem), "driver_version": drv}
    except Exception:
        return {"name": None, "memory_total_mb": None, "driver_version": None}


def parse_progress(log: str) -> Dict[str, Any]:
    lines = [ln for ln in log.splitlines() if ("fps=" in ln or "speed=" in ln or "frame=" in ln)]
    fps = speed = frames = None
    if lines:
        last = lines[-1]
        m = re.search(r"fps=\s*([0-9]+(?:\.[0-9]+)?)", last); fps = float(m.group(1)) if m else None
        m = re.search(r"speed=\s*([0-9]+(?:\.[0-9]+)?)x", last); speed = float(m.group(1)) if m else None
        m = re.search(r"frame=\s*([0-9]+)", last); frames = int(m.group(1)) if m else None
    return {"fps": fps, "speed_x": speed, "frames": frames}


# --- filter planning -------------------------------------------------------

def build_vf_or_complex(
    ffmpeg: str,
    scale: Optional[str],
    wm_path: Optional[str],
    overlay: str,
    want_gpu_decode: bool
) -> Tuple[List[str], str]:
    """
    Returns (ffmpeg_args_for_filters, filter_used_string).

    Priority:
      - Prefer GPU filters: scale_npp, then scale_cuda, then CPU scale with explicit bridges.
      - Prefer overlay_cuda; else CPU overlay with explicit bridges.
      - Never place 'format=nv12' *after* 'hwupload_cuda'.
    """
    used = []
    vf_args: List[str] = []
    complex_graph = ""

    have_scale_npp = has_filter(ffmpeg, "scale_npp")
    have_scale_cuda = has_filter(ffmpeg, "scale_cuda")
    have_overlay_cuda = has_filter(ffmpeg, "overlay_cuda")

    # No watermark case
    if not wm_path:
        if scale:
            if want_gpu_decode and have_scale_npp:
                vf_args = ["-vf", f"scale_npp={scale}"]
                used.append("scale_npp")
            elif want_gpu_decode and have_scale_cuda:
                vf_args = ["-vf", f"scale_cuda={scale}"]
                used.append("scale_cuda")
            else:
                # CPU scale with explicit bridges
                # hw frames -> CPU: hwdownload,format=nv12
                # CPU scale -> back to GPU: hwupload_cuda
                vf_args = ["-vf", f"hwdownload,format=nv12,scale={scale},hwupload_cuda"]
                used.append("scale(cpu)+hwdownload+hwupload_cuda")
        else:
            vf_args = []
        return (vf_args, "+".join(used))

    # Watermark case
    if want_gpu_decode and have_overlay_cuda:
        if scale and have_scale_npp:
            complex_graph = f"[0:v]scale_npp={scale}[v0];[v0][1:v]overlay_cuda={overlay}[vout]"
            used += ["scale_npp", "overlay_cuda"]
        elif scale and have_scale_cuda:
            complex_graph = f"[0:v]scale_cuda={scale}[v0];[v0][1:v]overlay_cuda={overlay}[vout]"
            used += ["scale_cuda", "overlay_cuda"]
        elif scale:
            complex_graph = (
                f"[0:v]hwdownload,format=nv12,scale={scale},hwupload_cuda[v0];"
                f"[v0][1:v]overlay_cuda={overlay}[vout]"
            )
            used += ["scale(cpu)+hwdownload+hwupload_cuda", "overlay_cuda"]
        else:
            complex_graph = f"[0:v][1:v]overlay_cuda={overlay}[vout]"
            used += ["overlay_cuda"]
        return (["-filter_complex", complex_graph, "-map", "[vout]"], "+".join(used))

    # CPU overlay fallback
    if scale and want_gpu_decode and (have_scale_npp or have_scale_cuda):
        scaler = "scale_npp" if have_scale_npp else "scale_cuda"
        complex_graph = (
            f"[0:v]{scaler}={scale}[v0gpu];"
            f"[v0gpu]hwdownload,format=nv12[v0cpu];"
            f"[v0cpu][1:v]overlay={overlay}[mix];"
            f"[mix]hwupload_cuda[vout]"
        )
        used += [scaler, "hwdownload+overlay(cpu)+hwupload_cuda"]
    elif scale:
        complex_graph = (
            f"[0:v]hwdownload,format=nv12,scale={scale}[v0cpu];"
            f"[v0cpu][1:v]overlay={overlay}[mix];"
            f"[mix]hwupload_cuda[vout]"
        )
        used += ["scale(cpu)+overlay(cpu)+hwupload_cuda"]
    else:
        complex_graph = (
            f"[0:v]hwdownload,format=nv12[v0cpu];"
            f"[v0cpu][1:v]overlay={overlay}[mix];"
            f"[mix]hwupload_cuda[vout]"
        )
        used += ["overlay(cpu)+hwupload_cuda"]

    return (["-filter_complex", complex_graph, "-map", "[vout]"], "+".join(used))


# --- core ------------------------------------------------------------------

def transcode_once(
    ffmpeg: str,
    inp: str,
    outp: str,
    codec: str,
    bitrate: str,
    preset: str,
    duration: Optional[float],
    scale: Optional[str],
    wm_path: Optional[str],
    overlay_pos: str,
    decode_mode: str = "gpu"  # "gpu" or "cpu"
) -> Dict[str, Any]:

    if not has_encoder(ffmpeg, codec):
        raise RuntimeError(f"encoder '{codec}' not available; check your ffmpeg build (NVENC/AV1).")

    want_gpu_decode = (decode_mode == "gpu")

    args = [ffmpeg, "-hide_banner", "-y", "-vsync", "0"]

    if want_gpu_decode:
        # Keep decode on GPU & use CUDA frames. Give NVDEC extra surfaces.
        args += ["-hwaccel", "cuda", "-hwaccel_output_format", "cuda", "-extra_hw_frames", "16"]
        # Helpful on some builds to make filters pick the right device
        args += ["-init_hw_device", "cuda=cuda", "-filter_hw_device", "cuda"]

    # inputs
    args += ["-i", inp]
    if wm_path:
        args += ["-loop", "1", "-i", wm_path]

    if duration:
        args += ["-t", str(duration)]

    # Build filters
    filt_args, filter_used = build_vf_or_complex(ffmpeg, scale, wm_path, overlay_pos, want_gpu_decode)
    args += filt_args

    # encoder params
    args += ["-c:v", codec, "-b:v", bitrate, "-preset", preset, "-rc", "vbr", "-movflags", "+faststart"]
    # audio: copy if present
    args += ["-c:a", "copy"]

    # Output path
    args += [outp]

    t0 = datetime.datetime.now()
    proc = run(args)
    t1 = datetime.datetime.now()
    if proc.returncode != 0:
        raise RuntimeError("ffmpeg failed:\n" + proc.stdout + f"\n\nARGS:\n{' '.join(args)}")

    parsed = parse_progress(proc.stdout)
    size = os.path.getsize(outp) if os.path.exists(outp) else 0
    return {
        "args": args,
        "filter_used": filter_used,
        "stdout_tail": "\n".join(proc.stdout.splitlines()[-15:]),
        "compute_time_us": (t1 - t0) / datetime.timedelta(microseconds=1),
        "fps": parsed["fps"],
        "speed_x": parsed["speed_x"],
        "frames": parsed["frames"],
        "output_size_bytes": size
    }


def main():
    ap = argparse.ArgumentParser(description="GPU NVENC benchmark.")
    ap.add_argument("--input", required=True, help="Path to input video")
    ap.add_argument("--duration", type=float, default=None, help="Trim to first N seconds")
    ap.add_argument("--repeat", type=int, default=1, help="Repeat each trial")
    ap.add_argument("--warmup", action="store_true", help="Run one warmup trial (not recorded)")
    ap.add_argument("--csv", default=None, help="Optional path to write CSV summary")
    ap.add_argument("--watermark", default=None, help="Path to watermark PNG (optional)")
    ap.add_argument("--overlay", default="main_w/2-overlay_w/2:main_h/2-overlay_h/2",
                    help="Overlay position (ffmpeg expr), e.g. '10:10' or 'main_w-overlay_w-10:10'")
    ap.add_argument("--decode", choices=["gpu", "cpu"], default="gpu",
                    help="Decode on GPU (default) or CPU.")
    ap.add_argument("--trials", nargs="+", default=[
        "codec=h264_nvenc,bitrate=5M,preset=p5",
        "codec=h264_nvenc,bitrate=12M,preset=p1,scale=1920:1080",
        "codec=hevc_nvenc,bitrate=6M,preset=p4",
        "codec=av1_nvenc,bitrate=3M,preset=p5"
    ], help="List like codec=h264_nvenc,bitrate=5M,preset=p5[,scale=WxH]")
    args = ap.parse_args()

    ffmpeg = which_ffmpeg()
    gi = gpu_info()

    def parse_trial(s: str) -> Dict[str, str]:
        d: Dict[str, str] = {}
        for kv in s.split(","):
            k, v = kv.split("=", 1)
            d[k.strip()] = v.strip()
        return d

    trial_specs = [parse_trial(s) for s in args.trials]

    # optional warmup
    if args.warmup:
        with tempfile.NamedTemporaryFile(suffix=".mp4", delete=True) as tmp:
            _ = transcode_once(ffmpeg, args.input, tmp.name,
                               trial_specs[0].get("codec", "h264_nvenc"),
                               trial_specs[0].get("bitrate", "5M"),
                               trial_specs[0].get("preset", "p5"),
                               args.duration,
                               trial_specs[0].get("scale"),
                               args.watermark,
                               args.overlay,
                               args.decode)

    results = []
    idx = 0
    for spec in trial_specs:
        for _ in range(args.repeat):
            with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
                outp = tmp.name
            res = transcode_once(ffmpeg, args.input, outp,
                                 spec.get("codec", "h264_nvenc"),
                                 spec.get("bitrate", "5M"),
                                 spec.get("preset", "p5"),
                                 args.duration,
                                 spec.get("scale"),
                                 args.watermark,
                                 args.overlay,
                                 args.decode)
            results.append({
                "trial_index": idx,
                "codec": spec.get("codec"),
                "bitrate": spec.get("bitrate"),
                "preset": spec.get("preset"),
                "scale_filter": res["filter_used"],
                "fps": res["fps"],
                "speed_x": res["speed_x"],
                "frames": res["frames"],
                "compute_time_us": res["compute_time_us"],
                "output_size_bytes": res["output_size_bytes"],
                "stdout_tail": res["stdout_tail"],
                "argv": " ".join(res["args"]),
            })
            idx += 1
            try:
                os.remove(outp)
            except OSError:
                pass

    report = {
        "gpu": gi,
        "ffmpeg_path": ffmpeg,
        "trial_count": len(results),
        "results": results
    }
    print(json.dumps(report, indent=2))

    if args.csv and results:
        with open(args.csv, "w", newline="") as f:
            w = csv.DictWriter(f, fieldnames=list(results[0].keys()))
            w.writeheader()
            w.writerows(results)


if __name__ == "__main__":
    main()
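Assuming the benchmark script above is saved as run_nvenc_bench.py (the file name is not visible in this capture), a typical invocation on a host with an NVENC-capable FFmpeg build might look like the sketch below; every flag used is defined in the argparse section above, and the input, watermark, and output paths are illustrative.

# Hypothetical paths and file name; adjust to your setup.
python3 run_nvenc_bench.py \
  --input ./sample.mp4 \
  --duration 8 \
  --warmup \
  --repeat 2 \
  --watermark ./logo.png \
  --overlay "main_w-overlay_w-10:10" \
  --decode gpu \
  --csv results.csv

The JSON report is printed to stdout, and the per-trial summary is written to results.csv with the same columns as the results file shown further below.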
@@ -0,0 +1,3 @@
chmod +x run_nvenc_bench.sh
./run_nvenc_bench.sh                  # uses ~/bench/sample.mp4 (auto-creates)
./run_nvenc_bench.sh /path/video.mp4  # use your own file
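The run script itself is not shown in this capture. If the Python benchmark is invoked directly instead, the --trials flag defined above accepts one spec string per trial, so a custom sweep might look like the following sketch (file name, paths, and values are illustrative).

python3 run_nvenc_bench.py --input /path/video.mp4 --duration 8 \
  --trials "codec=h264_nvenc,bitrate=8M,preset=p4,scale=1280:720" \
           "codec=hevc_nvenc,bitrate=4M,preset=p5"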
@@ -0,0 +1,46 @@
trial_index,codec,bitrate,preset,scale_filter,fps,speed_x,frames,compute_time_us,output_size_bytes,stdout_tail,argv
0,h264_nvenc,5M,p5,,73.0,2.44,240,5879259.0,2272623," Side data:
cpb: bitrate max/min/avg: 0/0/5000000 buffer size: 10000000 vbv_delay: N/A
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
vendor_id : [0][0][0][0]
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.12 bitrate= 3.0kbits/s speed=1.44x
frame= 41 fps=0.0 q=22.0 size= 0kB time=00:00:01.47 bitrate= 0.3kbits/s speed=2.49x
frame= 81 fps= 74 q=12.0 size= 256kB time=00:00:02.81 bitrate= 744.9kbits/s speed=2.58x
frame= 121 fps= 76 q=12.0 size= 768kB time=00:00:04.13 bitrate=1520.3kbits/s speed=2.59x
frame= 161 fps= 77 q=12.0 size= 1024kB time=00:00:05.48 bitrate=1530.1kbits/s speed=2.61x
frame= 201 fps= 77 q=13.0 size= 1536kB time=00:00:06.80 bitrate=1849.0kbits/s speed=2.62x
[mp4 @ 0x601c5da3d280] Starting second pass: moving the moov atom to the beginning of the file
frame= 240 fps= 73 q=13.0 Lsize= 2219kB time=00:00:07.97 bitrate=2278.7kbits/s speed=2.44x
video:2142kB audio:68kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.409259%",/usr/bin/ffmpeg -hide_banner -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -extra_hw_frames 16 -init_hw_device cuda=cuda -filter_hw_device cuda -i ./sample.mp4 -t 8.0 -c:v h264_nvenc -b:v 5M -preset p5 -rc vbr -movflags +faststart -c:a copy /tmp/tmpy5hxojjv.mp4
1,h264_nvenc,12M,p1,scale_cuda,191.0,6.34,240,3748632.0,3041922," handler_name : VideoHandler
vendor_id : [0][0][0][0]
encoder : Lavc58.134.100 h264_nvenc
Side data:
cpb: bitrate max/min/avg: 0/0/12000000 buffer size: 24000000 vbv_delay: N/A
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
vendor_id : [0][0][0][0]
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.12 bitrate= 3.0kbits/s speed=1.51x
frame= 102 fps=0.0 q=7.0 size= 768kB time=00:00:03.52 bitrate=1787.5kbits/s speed=5.93x
frame= 209 fps=191 q=7.0 size= 2304kB time=00:00:07.08 bitrate=2664.9kbits/s speed=6.46x
[mp4 @ 0x5c6c573cf740] Starting second pass: moving the moov atom to the beginning of the file
frame= 240 fps=191 q=7.0 Lsize= 2971kB time=00:00:07.97 bitrate=3050.1kbits/s speed=6.34x
video:2895kB audio:68kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.274427%",/usr/bin/ffmpeg -hide_banner -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -extra_hw_frames 16 -init_hw_device cuda=cuda -filter_hw_device cuda -i ./sample.mp4 -t 8.0 -vf scale_cuda=1920:1080 -c:v h264_nvenc -b:v 12M -preset p1 -rc vbr -movflags +faststart -c:a copy /tmp/tmp68ay0l6q.mp4
2,hevc_nvenc,6M,p4,,101.0,3.37,240,4821593.0,2393406," encoder : Lavc58.134.100 hevc_nvenc
Side data:
cpb: bitrate max/min/avg: 0/0/6000000 buffer size: 12000000 vbv_delay: N/A
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 69 kb/s (default)
Metadata:
handler_name : SoundHandler
vendor_id : [0][0][0][0]
frame= 1 fps=0.0 q=0.0 size= 0kB time=00:00:00.12 bitrate= 2.8kbits/s speed=1.18x
frame= 52 fps=0.0 q=17.0 size= 0kB time=00:00:01.83 bitrate= 0.2kbits/s speed=2.98x
frame= 110 fps= 98 q=12.0 size= 512kB time=00:00:03.77 bitrate=1110.9kbits/s speed=3.36x
frame= 168 fps=103 q=9.0 size= 1280kB time=00:00:05.71 bitrate=1834.1kbits/s speed=3.52x
frame= 226 fps=106 q=12.0 size= 1792kB time=00:00:07.63 bitrate=1922.2kbits/s speed=3.59x
[mp4 @ 0x62016db565c0] Starting second pass: moving the moov atom to the beginning of the file
frame= 240 fps=101 q=12.0 Lsize= 2337kB time=00:00:07.97 bitrate=2399.8kbits/s speed=3.37x
video:2260kB audio:68kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.392147%",/usr/bin/ffmpeg -hide_banner -y -vsync 0 -hwaccel cuda -hwaccel_output_format cuda -extra_hw_frames 16 -init_hw_device cuda=cuda -filter_hw_device cuda -i ./sample.mp4 -t 8.0 -c:v hevc_nvenc -b:v 6M -preset p4 -rc vbr -movflags +faststart -c:a copy /tmp/tmpkjy5g24f.mp4
Set executable permission on the file.
The shebang indicates this script should be directly executable, but the file permissions aren't set. Ensure chmod +x is applied to match the usage shown in the documentation. Apply this fix:
🧰 Tools
🪛 Ruff (0.14.3)
1-1: Shebang is present but file is not executable
(EXE001)
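The suggested patch itself is not included in this capture. A typical way to address the EXE001 finding is sketched below; the script path is illustrative, since the diff does not show where the file lives in the repository.

chmod +x path/to/benchmark_script.py                      # set the executable bit locally (path is illustrative)
git update-index --chmod=+x path/to/benchmark_script.py   # record the bit in the index so git tracks the permission
git commit -m "Set executable permission on benchmark script"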