This is a repository with a collection of useful commands, scripts and examples for easy copy -> paste
- Clear memory cache
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
- Create a self-signed SSL key and certificate
mkdir -p certs/my_com
openssl req -nodes -x509 -newkey rsa:4096 -keyout certs/my_com/my_com.key -out certs/my_com/my_com.crt -days 365 -subj "/C=US/ST=California/L=SantaClara/O=IT/CN=localhost"
- Create binary files with random content
# Just one file (1 MiB)
dd if=/dev/urandom of=file bs=1024 count=1024
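# Verify the result: the file should be exactly 1 MiB (1024 * 1024 = 1048576 bytes)
ls -l file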
# Create 10 files of size ~10MB
for a in {0..9}; do \
echo ${a}; \
dd if=/dev/urandom of=file.${a} bs=10240 count=1024; \
done
- Test connection to remote `host:port` (check a port being open without using `netcat` or other tools)
# Check if port 8080 is open on remote
bash -c "</dev/tcp/remote/8080" 2>/dev/null
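# Same check with an explicit timeout, so a filtered (firewalled) port does not hang the shell
# (assumes the coreutils `timeout` command is available)
timeout 2 bash -c "</dev/tcp/remote/8080" 2>/dev/null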
[ $? -eq 0 ] && echo "Port 8080 on host 'remote' is open"
- Suppress the `Terminated` message from `kill` on a background process by waiting for it with `wait` and redirecting its stderr output to `/dev/null`. This is from this Stack Overflow answer.
# Call the kill command (assumes ${PID} is the ID of a background job started in this shell)
kill ${PID}
wait ${PID} 2>/dev/null
- curl variables
The `curl` command can provide a lot of information about the transfer. See the curl man page and search for `--write-out`.
See all supported variables in curl.format.txt
# Example for getting http response code (variable http_code)
curl -o /dev/null -s --write-out '%{http_code}' https://curl.haxx.se
# Example for one-liner printout of several connection time parameters
curl -w "\ndnslookup: %{time_namelookup} \nconnect: %{time_connect} \nappconnect: %{time_appconnect} \npretransfer: %{time_pretransfer} \nredirect: %{time_redirect} \nstarttransfer: %{time_starttransfer} \n---------\ntotal: %{time_total} \nsize: %{size_download}\n" \
-so /dev/null https://curl.haxx.se
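# Another example: print the resolved remote IP and the final URL after redirects
# (using the write-out variables remote_ip and url_effective)
curl -sL -o /dev/null -w 'remote_ip: %{remote_ip}\nfinal_url: %{url_effective}\n' https://curl.haxx.se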
# Example for printing all variables and their values by using an external file with the format
curl -o /dev/null -s --write-out '@files/curl.format.txt' https://curl.haxx.se
- Single binary `curl`
# Get the archive, extract (notice the xjf parameter to tar) and copy.
wget -O curl.tar.bz2 http://www.magicermine.com/demos/curl/curl/curl-7.30.0.ermine.tar.bz2 && \
tar xjf curl.tar.bz2 && \
cp curl-7.30.0.ermine/curl.ermine curl && \
./curl --help
- Single static binaries, taken from this cool static-binaries repository
# tcpdump
curl -O https://raw.githubusercontent.com/yunchih/static-binaries/master/tcpdump
- Single static binary `vi`
# vi (vim)
curl -OL https://eldada.jfrog.io/artifactory/tools/x86_64/vi.tar.gz
- Single static binary `jq` (Linux). Look in https://stedolan.github.io/jq/download/ for additional flavors
# jq
curl -OL https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
- Get http code using wget (without curl)
In cases where curl is not available, use wget to get the http code returned from an HTTP endpoint
wget --spider -S -T 2 www.jfrog.org 2>&1 | grep "^ HTTP/" | awk '{print $2}' | tail -1
- Poor man's `top` shell scripts (in Linux only!). Good for when `top` is not installed
Get CPU and memory usage by processes on the current host. Also useful in Linux based Docker containers.
- Using data from `/proc`: top.sh script
- Using data from `ps -eo`: top-ps.sh script
- Process info (in Linux only!)
To get process info (command line, environment variables) using its PID or a search string, use procInfo.sh.
- Add a file to a WAR file: addFileToWar.sh
The `/proc` file system has all the information about the running processes. See the full description in the proc man page.
- Get current processes running (a simple alternative to `ps` in case it's missing)
for a in $(ls -d /proc/*/); do if [[ -f ${a}exe ]]; then ls -l ${a}exe; fi; done
- Get a process command line (see usage in procInfo.sh)
# Assume PID is the process ID you are looking at
cat /proc/${PID}/cmdline | tr '\0' ' '
# or
cat /proc/${PID}/cmdline | sed -z 's/$/ /g'
- Get a process's environment variables (see usage in procInfo.sh)
# Assume PID is the process ID you are looking at
cat /proc/${PID}/environ | tr '\0' '\n'
# or
cat /proc/${PID}/environ | sed -z 's/$/\n/g'
- Get the load average from `/proc` instead of a command
cat /proc/loadavg | awk '{print $1 ", " $2 ", " $3}'
- Get the top 10 process IDs and names, sorted by time waiting for disk IO (aggregated block I/O delays, measured in clock ticks)
cut -d" " -f 1,2,42 /proc/[0-9]*/stat | sort -n -k 3 | tail -10
- Full source in this gist
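The `/proc` reads above can be combined into a small helper (a sketch; `proc_info` is a name made up here, not the procInfo.sh script itself):

```shell
# Print a process's command line and environment from /proc (Linux only)
proc_info() {
  local pid=$1
  echo "== Command line =="
  tr '\0' ' ' < "/proc/${pid}/cmdline"; echo
  echo "== Environment =="
  tr '\0' '\n' < "/proc/${pid}/environ"
}

# Example: inspect the current shell
proc_info $$
```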
`screen` (see the macOS man page and bash man page)
- The `screen` command quick reference
# Start a new session with session name
screen -S <session_name>
# List running screens
screen -ls
# Attach to a running session
screen -x
# Attach to a running session with name
screen -r <session_name>
# Detach a running session
screen -d <session_name>
- Screen commands are prefixed by an escape key, by default Ctrl-a (that's Control-a, sometimes written ^a). To send a literal Ctrl-a to the programs in screen, use Ctrl-a a. This is useful when working with screen within screen. For example, Ctrl-a a n will move to the next window on the inner screen.
| Description | Command |
|---|---|
| Exit and close session | Ctrl-d or exit |
| Detach current session | Ctrl-a d |
| Detach and logout (quick exit) | Ctrl-a D D |
| Kill current window | Ctrl-a k |
| Exit screen | Ctrl-a : quit or exit all of the programs in screen |
| Force-exit screen | Ctrl-a C-\ (not recommended) |
- Help
| Description | Command |
|---|---|
| See help | Ctrl-a ? (Lists keybindings) |
Sysbench is a multi-purpose benchmark that features tests for CPU, memory, I/O, and even database performance.
See full content for this section in linuxconfig.org's how to benchmark your linux system.
- Installation (Debian/Ubuntu)
sudo apt install sysbench
- CPU benchmark
sysbench --test=cpu run
- Memory benchmark
sysbench --test=memory run
- I/O benchmark
sysbench --test=fileio --file-test-mode=seqwr run
(Note: with sysbench 1.0+, the `--test=` prefix is deprecated; e.g. `sysbench cpu run` also works.)
From the Apache HTTP server benchmarking tool page: "ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server."
# A simple benchmarking of a web server. Running 100 requests with up to 10 concurrent requests
ab -n 100 -c 10 http://www.jfrog.com/
A simple createLoad.sh script to create disk IO and CPU load in the current environment. The script creates and deletes files in a temp directory, which strains the CPU and disk IO.
WARNING: Running this script with many threads can bring a system to a halt or even crash it. USE WITH CARE!
./createLoad.sh --threads 10
- Rebasing a branch on master
# Update local copy of master
git checkout master
git pull
# Rebase the branch on the updated master
git checkout my-branch
git rebase master
# Rebase and squash
git rebase master -i
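# If the rebase hits conflicts, you can abort back to the pre-rebase state:
#   git rebase --abort
# or, after resolving the conflicts, continue:
#   git rebase --continue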
# If problems are found, follow the on-screen instructions to resolve conflicts and complete the rebase.
- Resetting a fork with upstream. WARNING: This will override any local changes in your fork!
git remote add upstream /url/to/original/repo
git fetch upstream
git checkout master
git reset --hard upstream/master
git push origin master --force
- Add a `Signed-off-by` line by the committer at the end of the commit log message
git commit -s -m "Your commit message"
Some useful commands for debugging a java process
# Go to the java/bin directory
cd ${JAVA_HOME}/bin
# Get your java process id
PID=$(ps -ef | grep java | grep -v grep | awk '{print $2}')
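# Guard: the grep may match nothing (or several processes); make sure PID is set before using it
[ -n "${PID}" ] || echo "No java process found"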
# Get JVM native memory usage
# For this, the java process must run with the -XX:NativeMemoryTracking=summary parameter
./jcmd ${PID} VM.native_memory summary
# Get all JVM info
./jinfo ${PID}
# Get JVM flags for a java process
./jinfo -flags ${PID}
# Get JVM heap info
./jcmd ${PID} GC.heap_info
# Get JVM Metaspace info
./jcmd ${PID} VM.metaspace
# Trigger a full GC
./jcmd ${PID} GC.run
# Java heap memory histogram
./jmap -histo ${PID}
- Allow a user to run docker commands without sudo
sudo usermod -aG docker user
# IMPORTANT: Log out and back in after this change!
- See what Docker is using
docker system df
- Prune unused Docker resources
# Prune system
docker system prune
# Remove all unused Docker images
docker system prune -a
# Prune only parts
docker image/container/volume/network prune
- Remove dangling volumes
docker volume rm $(docker volume ls -f dangling=true -q)
- Quit an interactive session without closing it:
# Ctrl + p + q (order is important)
- Attach back to it
docker attach <container-id>
- Save a Docker image to be loaded in another computer
# Save
docker save -o ~/the.img the-image:tag
# Load into another Docker engine
docker load -i ~/the.img
- Connect to the Docker VM on Mac
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
# Ctrl + A + D to exit
- Connect to the Rancher Desktop VM on Mac
LIMA_HOME="${HOME}/Library/Application Support/rancher-desktop/lima" "/Applications/Rancher Desktop.app/Contents/Resources/resources/darwin/lima/bin/limactl" shell 0
- Remove `<none>` images (usually leftovers of failed docker builds)
docker images | grep none | awk '{print $3}' | xargs docker rmi
- Using dive to analyse a Docker image
# Must pull the image before analysis
docker pull redis:latest
# Run using dive Docker image
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive:latest redis:latest
- Adding health checks for containers, checking a tcp port being open without using netcat or other tools in your image
# Check if port 8081 is open
bash -c "</dev/tcp/localhost/8081" 2>/dev/null
[ $? -eq 0 ] && echo "Port 8081 on localhost is open"
A collection of useful Docker tools
- A simple terminal UI for Docker and docker-compose: lazydocker
- A web based UI for local and remote Docker: Portainer
- Analyse a Docker image with dive
A few Dockerfiles I use in my work
- An Ubuntu with added tools and no root: Dockerfile-ubuntu-with-tools
# For a local build
docker build -f Dockerfile-ubuntu-with-tools -t eldada.jfrog.io/docker/ubuntu-with-tools:24.04 .
# Multi arch build and push
# If needed, create a buildx builder and use it
docker buildx create --platform linux/arm64,linux/amd64 --name build-amd64-arm64
docker buildx use build-amd64-arm64
# Multi arch build and push
docker buildx build --platform linux/arm64,linux/amd64 -f Dockerfile-ubuntu-with-tools -t eldada.jfrog.io/docker/ubuntu-with-tools:24.04 --push .
- An Alpine with added tools: Dockerfile-alpine-with-tools
# For a local build
docker build -f Dockerfile-alpine-with-tools -t eldada.jfrog.io/docker/alpine-with-tools:3.21.0 .
# Multi arch build and push
# If needed, create a buildx builder and use it
docker buildx create --platform linux/arm64,linux/amd64 --name build-amd64-arm64
docker buildx use build-amd64-arm64
# Multi arch build and push
docker buildx build --platform linux/arm64,linux/amd64 -f Dockerfile-alpine-with-tools -t eldada.jfrog.io/docker/alpine-with-tools:3.21.0 --push .
See Artifactory related scripts and examples in artifactory
A command line effect of the Matrix (the movie) text
while true; do
echo $LINES $COLUMNS $((RANDOM % $COLUMNS)) $(printf "\U$((RANDOM % 500))"); sleep 0.04;
done | awk '{a[$3]=0; for (x in a){o=a[x];a[x]=a[x]+1; printf "\033[%s;%sH\033[2;32m%s",o,x,$4; printf "\033[%s;%sH\033[1;37m%s\033[0;0H", a[x],x,$4; if (a[x]>=$1){a[x]=0;}}}'
Contributions are more than welcome; open a pull request.