What is the issue that we're seeing that is motivating this decision or change?
- The AI world is moving fast, with many runtimes and prebaked environments. Neither we nor builders can cover everything; instead, Jan should adopt these environments and facilitate them as much as possible.
- For `Run your own A.I`: builders can build an app on Jan (a Node.js environment) and connect it to an external endpoint that serves the actual AI.
- e.g. 1: Nitro acting as a proxy to `triton-inference-server` running inside a Docker container controlled by the Jan app (see the sketch after this list).
- e.g. 2: Original models come in many formats (PyTorch, PaddlePaddle). To run one locally in its most optimized form, there must be a step that transpiles the model ([Ollama import model](https://github.com/jmorganca/ollama/blob/main/docs/import.md), TensorRT). Jan could prebuild these and let users pull them, but that comes later.
- For `Build your own A.I`: users can fine-tune models locally (Jan can also help with remote fine-tuning, but that comes later).
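
To make e.g. 1 concrete, here is a minimal sketch of how Jan could control a `triton-inference-server` container through dockerode. The image tag, port, and model-repository path are illustrative assumptions, not settled choices.

```ts
import Docker from "dockerode";

// Connects to the local Docker daemon (defaults to /var/run/docker.sock).
const docker = new Docker();

// Illustrative image tag; the real tag and version are TBD.
const IMAGE = "nvcr.io/nvidia/tritonserver:23.10-py3";

async function startTriton(modelRepo: string) {
  // Pull the image and wait for the pull stream to finish.
  const stream = await docker.pull(IMAGE);
  await new Promise<void>((resolve, reject) =>
    docker.modem.followProgress(stream, (err) => (err ? reject(err) : resolve()))
  );

  // Run Triton with the host model repository mounted read-only.
  const container = await docker.createContainer({
    Image: IMAGE,
    Cmd: ["tritonserver", "--model-repository=/models"],
    ExposedPorts: { "8000/tcp": {} },
    HostConfig: {
      Binds: [`${modelRepo}:/models:ro`],
      PortBindings: { "8000/tcp": [{ HostPort: "8000" }] },
    },
  });
  await container.start();
  return container; // Nitro can now proxy inference requests to localhost:8000.
}
```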
## Decision
What is the change that we're proposing and/or doing?
- Add a Docker client as a Core module, built on [dockerode](https://github.com/apocas/dockerode) (a sketch of the module shape follows this list)
- Ship two example AI apps (adr-002) that demonstrate the module and actually exercise it
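
A rough sketch of what the Core module surface could look like. The class name and method signatures are hypothetical, meant only to show how dockerode would be wrapped; they are not Jan's final API.

```ts
import Docker from "dockerode";

// Hypothetical Core module surface; names are illustrative.
export class DockerModule {
  private docker = new Docker();

  // Check that a Docker daemon is reachable before apps rely on it.
  async isAvailable(): Promise<boolean> {
    try {
      await this.docker.ping();
      return true;
    } catch {
      return false;
    }
  }

  // Start a container from an already-pulled image and return its id.
  async run(image: string, cmd: string[]): Promise<string> {
    const container = await this.docker.createContainer({ Image: image, Cmd: cmd });
    await container.start();
    return container.id;
  }

  // Stop and remove a container started via run().
  async stop(id: string): Promise<void> {
    const container = this.docker.getContainer(id);
    await container.stop();
    await container.remove();
  }
}
```

The example apps (adr-002) would then call this module rather than talking to the Docker daemon directly, keeping container lifecycle management inside Core.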
## Consequences
What becomes easier or more difficult to do because of this change?