build: cross compile linux #2020
Conversation
@chewi I created some toolchain files. I figured this is the better approach long term, and they should be reusable by people building outside of our docker images. The aarch64 image builds successfully! https://github.com/LizardByte/Sunshine/actions/runs/7521536314/job/20472490455 There are some things to work out before bringing these changes to the other images.
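For readers outside the PR, the toolchain-file approach mentioned here can be sketched as below. This is a hedged illustration only: the compiler names, the generated filename, and the find-root settings are common cross-compiling conventions, not necessarily the exact contents of the files this PR adds.

```shell
#!/bin/sh
# Sketch: generate a minimal CMake toolchain file for an aarch64-linux-gnu
# cross build. Compiler names and settings are assumptions, not the PR's
# exact files.
cat > toolchain-aarch64-linux-gnu.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Look for headers and libraries only in the target root, programs on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF
# Confirm the file was written with the expected system name.
grep -c 'CMAKE_SYSTEM_NAME Linux' toolchain-aarch64-linux-gnu.cmake
```

Such a file is then passed to CMake with `-DCMAKE_TOOLCHAIN_FILE=...`, which is how the Dockerfile later in this thread selects it for non-x86_64 targets.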
x86_64 and aarch64 are building now.
I tried installing sunshine under the build platform architecture and copying it to the arch-specific image, but it fails to copy.

# syntax=docker/dockerfile:1.4
# artifacts: true
# platforms: linux/amd64,linux/arm64/v8
# platforms_pr: linux/amd64,linux/arm64/v8
# no-cache-filters: sunshine-base,artifacts,sunshine
ARG BASE=fedora
ARG TAG=39
FROM --platform=$BUILDPLATFORM ${BASE}:${TAG} AS sunshine-base
ARG TARGETPLATFORM
ARG TAG
ENV TAG=${TAG}
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# setup env
WORKDIR /env
RUN <<_ENV
#!/bin/bash
set -e
case "${TARGETPLATFORM}" in
linux/amd64)
TARGETARCH=x86_64
echo GCC_FLAVOR="" >> ./env
echo "DNF=( dnf -y --releasever \"${TAG}\" --forcearch \"${TARGETARCH}\" )" >> ./env
echo "DNF2=( dnf -y --installroot /mnt/install --releasever \"${TAG}\" --forcearch \"${TARGETARCH}\" )" >> ./env
;;
linux/arm64)
TARGETARCH=aarch64
echo GCC_FLAVOR="-${TARGETARCH}-linux-gnu" >> ./env
echo "DNF=( dnf -y --installroot /mnt/cross --releasever \"${TAG}\" --forcearch \"${TARGETARCH}\" )" >> ./env
echo "DNF2=( dnf -y --installroot /mnt/install --releasever \"${TAG}\" --forcearch \"${TARGETARCH}\" )" >> ./env
;;
*)
echo "unsupported platform: ${TARGETPLATFORM}";
exit 1
;;
esac
echo TARGETARCH=${TARGETARCH} >> ./env
echo TUPLE=${TARGETARCH}-linux-gnu >> ./env
_ENV
FROM sunshine-base AS sunshine-build
# reused args from base
ARG TARGETPLATFORM
# args from ci workflow
ARG BRANCH
ARG BUILD_VERSION
ARG COMMIT
# note: BUILD_VERSION may be blank
ENV BRANCH=${BRANCH}
ENV BUILD_VERSION=${BUILD_VERSION}
ENV COMMIT=${COMMIT}
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# install build dependencies
# hadolint ignore=DL3041
RUN <<_DEPS_A
#!/bin/bash
set -e
# shellcheck source=/dev/null
source /env/env
dnf -y update
dnf -y install \
cmake-3.27.* \
gcc"${GCC_FLAVOR}"-13.2.* \
gcc-c++"${GCC_FLAVOR}"-13.2.* \
git-core \
nodejs \
pkgconf-pkg-config \
rpm-build \
wayland-devel \
wget \
which
dnf clean all
_DEPS_A
# install host dependencies
# hadolint ignore=DL3041
RUN <<_DEPS_B
#!/bin/bash
set -e
# shellcheck source=/dev/null
source /env/env
# Initialize an array for packages
packages=(
boost-devel-1.81.0*
glibc-devel
libappindicator-gtk3-devel
libcap-devel
libcurl-devel
libdrm-devel
libevdev-devel
libnotify-devel
libstdc++-devel
libva-devel
libvdpau-devel
libX11-devel
libxcb-devel
libXcursor-devel
libXfixes-devel
libXi-devel
libXinerama-devel
libXrandr-devel
libXtst-devel
mesa-libGL-devel
miniupnpc-devel
numactl-devel
openssl-devel
opus-devel
pulseaudio-libs-devel
wayland-devel
)
# Conditionally include arch specific packages
if [[ "${TARGETARCH}" == 'x86_64' ]]; then
packages+=(intel-mediasdk-devel)
fi
"${DNF[@]}" install \
filesystem
# Install packages using the array
"${DNF[@]}" --setopt=tsflags=noscripts install "${packages[@]}"
# Clean up
"${DNF[@]}" clean all
_DEPS_B
# todo - enable cuda once it's supported for gcc 13 and fedora 39
## install cuda
#WORKDIR /build/cuda
## versions: https://developer.nvidia.com/cuda-toolkit-archive
#ENV CUDA_VERSION="12.0.0"
#ENV CUDA_BUILD="525.60.13"
## hadolint ignore=SC3010
#RUN <<_INSTALL_CUDA
##!/bin/bash
#set -e
#
## shellcheck source=/dev/null
#source /env/env
#cuda_prefix="https://developer.download.nvidia.com/compute/cuda/"
#cuda_suffix=""
#if [[ "${TARGETARCH}" == 'aarch64' ]]; then
# cuda_suffix="_sbsa"
#fi
#url="${cuda_prefix}${CUDA_VERSION}/local_installers/cuda_${CUDA_VERSION}_${CUDA_BUILD}_linux${cuda_suffix}.run"
#echo "cuda url: ${url}"
#wget "$url" --progress=bar:force:noscroll -q --show-progress -O ./cuda.run
#chmod a+x ./cuda.run
#./cuda.run --silent --toolkit --toolkitpath=/build/cuda --no-opengl-libs --no-man-page --no-drm
#rm ./cuda.run
#_INSTALL_CUDA
# copy repository
WORKDIR /build/sunshine/
COPY --link .. .
# setup build directory
WORKDIR /build/sunshine/build
# cmake and cpack
# todo - add cmake argument back in for cuda support "-DCMAKE_CUDA_COMPILER:PATH=/build/cuda/bin/nvcc \"
# todo - re-enable "DSUNSHINE_ENABLE_CUDA"
RUN <<_MAKE
#!/bin/bash
set -e
# shellcheck source=/dev/null
source /env/env
# shellcheck disable=SC2086
if [[ "${TARGETARCH}" == 'aarch64' ]]; then
CXX_FLAG_1="$(echo /mnt/cross/usr/include/c++/[0-9]*/)"
CXX_FLAG_2="$(echo /mnt/cross/usr/include/c++/[0-9]*/${TUPLE%%-*}-*/)"
LD_FLAG="$(echo /mnt/cross/usr/lib/gcc/${TUPLE%%-*}-*/[0-9]*/)"
export \
CXXFLAGS="-isystem ${CXX_FLAG_1} -isystem ${CXX_FLAG_2}" \
LDFLAGS="-L${LD_FLAG}" \
PKG_CONFIG_LIBDIR=/mnt/cross/usr/lib64/pkgconfig:/mnt/cross/usr/share/pkgconfig \
PKG_CONFIG_SYSROOT_DIR=/mnt/cross \
PKG_CONFIG_SYSTEM_INCLUDE_PATH=/mnt/cross/usr/include \
PKG_CONFIG_SYSTEM_LIBRARY_PATH=/mnt/cross/usr/lib64
fi
# use an array so that no empty argument is passed to cmake on x86_64
TOOLCHAIN_OPTION=()
if [[ "${TARGETARCH}" != 'x86_64' ]]; then
TOOLCHAIN_OPTION=("-DCMAKE_TOOLCHAIN_FILE=toolchain-${TUPLE}.cmake")
fi
cmake \
"${TOOLCHAIN_OPTION[@]}" \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/usr \
-DSUNSHINE_ASSETS_DIR=share/sunshine \
-DSUNSHINE_EXECUTABLE_PATH=/usr/bin/sunshine \
-DSUNSHINE_ENABLE_WAYLAND=ON \
-DSUNSHINE_ENABLE_X11=ON \
-DSUNSHINE_ENABLE_DRM=ON \
-DSUNSHINE_ENABLE_CUDA=OFF \
/build/sunshine
make -j "$(nproc)"
cpack -G RPM
_MAKE
FROM scratch AS artifacts
ARG BASE
ARG TAG
ARG TARGETARCH
COPY --link --from=sunshine-build /build/sunshine/build/cpack_artifacts/Sunshine.rpm /sunshine-${BASE}-${TAG}-${TARGETARCH}.rpm
FROM sunshine-base AS sunshine-install
# copy rpm from builder
COPY --link --from=artifacts /sunshine*.rpm /sunshine.rpm
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# install sunshine using custom DNF command
RUN <<_INSTALL_SUNSHINE
#!/bin/bash
set -e
# shellcheck source=/dev/null
source /env/env
"${DNF2[@]}" update
"${DNF2[@]}" install /sunshine.rpm
"${DNF2[@]}" clean all
rm -rf /var/cache/yum
_INSTALL_SUNSHINE
FROM ${BASE}:${TAG} AS sunshine
# copy installed files from sunshine-install
COPY --from=sunshine-install /mnt/install/ /usr/
# validate sunshine is installed
RUN <<_VALIDATE_SUNSHINE
#!/bin/bash
set -e
# make sure sunshine is at /usr/bin/sunshine
if [[ ! -f /usr/bin/sunshine ]]; then
echo "sunshine not found at /usr/bin/sunshine"
exit 1
fi
# make sure the version command works
sunshine --version
_VALIDATE_SUNSHINE
# network setup
EXPOSE 47984-47990/tcp
EXPOSE 48010
EXPOSE 47998-48000/udp
# setup user
ARG PGID=1000
ENV PGID=${PGID}
ARG PUID=1000
ENV PUID=${PUID}
ENV TZ="UTC"
ARG UNAME=lizard
ENV UNAME=${UNAME}
ENV HOME=/home/$UNAME
# setup user
RUN <<_SETUP_USER
#!/bin/bash
set -e
groupadd -f -g "${PGID}" "${UNAME}"
useradd -lm -d ${HOME} -s /bin/bash -g "${PGID}" -u "${PUID}" "${UNAME}"
mkdir -p ${HOME}/.config/sunshine
ln -s ${HOME}/.config/sunshine /config
chown -R ${UNAME} ${HOME}
_SETUP_USER
USER ${UNAME}
WORKDIR ${HOME}
# entrypoint
ENTRYPOINT ["/usr/bin/sunshine"]

Output snip:

=> [sunshine-install 2/2] RUN <<_INSTALL_SUNSHINE (#!/bin/bash...) 119.3s
=> ERROR [sunshine 2/5] COPY --from=sunshine-install /mnt/install/ /usr/ 0.0s
------
> [sunshine 2/5] COPY --from=sunshine-install /mnt/install/ /usr/:
------
cannot replace to directory /var/lib/docker/overlay2/s6q271u3bmdgrx1gx21bca2on/merged/usr/bin with file

Failed to deploy '<unknown> Dockerfile: docker/fedora-39.dockerfile': Image build failed with exit code 1.
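One possible reading of that error (an assumption, not a verified diagnosis): the installroot's top level, not just its usr tree, is being copied. On Fedora, /bin, /lib, and friends are UsrMove symlinks into usr/, and Docker refuses to replace an existing real directory like /usr/bin with a symlink. The shape of the failure can be reproduced with plain cp, no Docker, using made-up paths:

```shell
#!/bin/sh
# Hedged reproduction of the "cannot replace ... with file" class of error:
# on Fedora-style roots, bin is a symlink to usr/bin, and copying that
# symlink over an existing real directory is rejected.
mkdir -p demo/install/usr/bin demo/dest/usr/bin
ln -s usr/bin demo/install/bin            # UsrMove-style symlink
if ! cp -a demo/install/bin demo/dest/usr/ 2>/dev/null; then
  echo "refused to replace directory with symlink"
fi
```

If that is indeed the cause, copying only `/mnt/install/usr/` into `/usr/` (and handling paths outside usr separately) might avoid the clash; again, that is a guess, not a verified fix.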
Ah, I kinda wish you'd held off with Debian, as I'd been working on Ubuntu myself. I only had CUDA to fix. Please take a look for comparison. I've provided two different approaches for doing CUDA, but I think using Ubuntu's (or Debian's) own packages works best overall. Also take note of the

Did you work out the platform stuff yet, where you only use QEMU at the end? I think you did, but I'm not sure.
Yea, before I dug in I thought I could just use the run file for the other arch, but it seems to fail when running on x86_64, and it fails in a very weird way. It must completely crash the container or something. I discovered the cross installation method last night, but didn't implement it yet.

I'll probably not use the Ubuntu-provided packages, as for 22.04 it is CUDA 11.5 and I'd guess for 20.04 it's even older. I found this method: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=arm64-sbsa&Compilation=Cross&Distribution=Ubuntu&target_version=22.04&target_type=deb_network_cross But they don't list Debian like they do for the run files, and I'm not sure the Ubuntu packages will work for that.

I'll take a look at your PR for reference, but would it be possible for you to put your changes into the existing docker files? That would make it easier to see the changes.
No, I couldn't figure it out. This was my best attempt. #2020 (comment)
Does it matter? Sunshine still builds with the older ones regardless, right? Or does that prevent it from working with newer GPUs?
That method is one of the two I submitted. If you look at the individual commits, you'll see it. I think you only need a few header files, so that may be a way to take this approach while still supporting Debian and ppc64el.
Will do.
It builds, but with less capability. See here:
I've been working with the Fedora 39 Dockerfile you have in your branch. I saw your approach above, and I can't see why it fails, but I didn't try it yet because I thought I might be able to do it a different way. I began with

I looked around to see what other people do. It seems that they either avoid doing

I'll now try your approach or something like it, as it feels like the only sensible option at this point.
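The cross-install pattern being discussed can be sketched in isolation. A shell array keeps the repeated dnf options in one place, as the DNF/DNF2 variables in the Dockerfile above do; here echo stands in for the real invocation, which needs root on a Fedora host.

```shell
#!/bin/bash
# Sketch of the cross-install pattern: dnf populates a separate root for the
# foreign architecture, which a later stage copies out. The array mirrors the
# DNF/DNF2 variables in the Dockerfile; the releasever and package are
# illustrative. echo stands in for the real run.
DNF=( dnf -y --installroot /mnt/cross --releasever 39 --forcearch aarch64 )
# A real invocation would be: sudo "${DNF[@]}" install boost-devel
echo "${DNF[@]}" install boost-devel
```

Because --installroot keeps the foreign-arch packages out of the build image's own root, the host toolchain and the target headers/libraries never mix, which is what makes the later `-isystem`/`PKG_CONFIG_SYSROOT_DIR` setup in the Dockerfile workable.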
I got it to work, but for some reason it installed development packages and came out huge. Will figure it out later.
Co-Authored-By: James Le Cuirot <[email protected]>
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
## master #2020 +/- ##
=========================================
- Coverage 6.11% 6.10% -0.01%
=========================================
Files 85 85
Lines 18303 18303
Branches 8319 8319
=========================================
- Hits 1119 1118 -1
+ Misses 15375 15374 -1
- Partials 1809 1811 +2

Flags with carried forward coverage won't be shown.
Closing this for now, might revisit it at a later point.

Description
Draft PR based on #2018