Commit 981c550

Update documentation to reality

Signed-off-by: Ben Firshman <[email protected]>
Parent: ed630c1

3 files changed: 26 additions, 18 deletions


CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ As much as possible, this is attempting to follow the [Standard Go Project Layou
 - `pkg/client/` - Client used by the CLI to communicate with the server.
 - `pkg/database/` - Used by the server to store metadata about models.
 - `pkg/docker/` - Various interfaces with Docker for building and running containers.
-- `pkg/model/` - Model, repo, and configs (`cog.yaml`).
+- `pkg/model/` - Versions, repos, and configs (`cog.yaml`).
 - `pkg/server/` - Server for storing data and building Docker images.
 - `pkg/serving/` - Runs inferences to test models.
 - `pkg/settings/` - Manages `.cog` directory in repos and `.config/cog/` directory for user settings.

README.md

Lines changed: 5 additions & 7 deletions

@@ -40,18 +40,16 @@ environment:
   python_version: "3.8"
   python_requirements: "requirements.txt"
   system_packages:
-  - libgl1-mesa-glx
-  - libglib2.0-0
+    - libgl1-mesa-glx
+    - libglib2.0-0
 ```

 3. Push it to a repository and build it:

 ```
-$ cog build
+$ cog push
 --> Uploading '.' to repository http://10.1.2.3/colorization... done
---> Building CPU Docker image... done
---> Building GPU Docker image... done
---> Built model b6a2f8a2d2ff
+--> Created version b6a2f8a2d2ff
 ```

 This has:
@@ -72,7 +70,7 @@ $ cog infer b6a2f8a2d2ff -i @input.png -o @output.png
 It is also just a Docker image, so you can run it as an HTTP service wherever Docker runs:

 ```
-  $ cog show b6a2f8a2d2ff
+$ cog show b6a2f8a2d2ff
 ...
 Docker image (GPU): registry.hooli.net/colorization:b6a2f8a2d2ff-gpu
 Docker image (CPU): registry.hooli.net/colorization:b6a2f8a2d2ff-cpu
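The `cog show` output in the hunk above names one Docker image per compute type. As an illustrative sketch (the tag scheme is inferred from that example output, not documented by this commit), the image references appear to follow a `<registry>/<repo>:<version>-<cpu|gpu>` pattern:

```shell
# Sketch only: REGISTRY, REPO, and VERSION are the example values from
# the README diff above; the tag layout is inferred from its output.
REGISTRY=registry.hooli.net
REPO=colorization
VERSION=b6a2f8a2d2ff

# Assemble the per-architecture image references by hand:
GPU_IMAGE="$REGISTRY/$REPO:$VERSION-gpu"
CPU_IMAGE="$REGISTRY/$REPO:$VERSION-cpu"

echo "$GPU_IMAGE"
echo "$CPU_IMAGE"
```

These are the same strings `cog show` prints, so either image can be pulled and run with plain `docker` once built.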

docs/getting-started.md

Lines changed: 20 additions & 10 deletions

@@ -1,36 +1,46 @@
 # Getting started with Cog

-First step is to start a server. You'll need to point it at a Docker registry to store the Docker images For example, [create one on Google Cloud Container Registry](https://cloud.google.com/container-registry/docs/quickstart).
+First step is to start a server. You'll need to point it at a Docker registry to store the Docker images. For example, [create one on Google Cloud Container Registry](https://cloud.google.com/container-registry/docs/quickstart).

 Then, start the server, pointing at your Docker registry:

     cog server --port=8080 --docker-registry=gcr.io/some-project/cog

-
 Next, let's build a model. We have [some models you can play around with](https://github.com/replicate/cog-examples). Clone that repository (you'll need git-lfs) and hook up that directory to your Cog server:

     cd example-models/inst-colorization/
     cog repo set localhost:8080/examples/inst-colorization

-Then, let's build it:
+Take a look at `cog.yaml` and `model.py` to see what we're building.
+
+Then, let's push it:
+
+    cog push

-    cog build
+This has uploaded the current directory to the server, and the server has stored it as a version of your model. In the background, it is now building two Docker images (CPU and GPU variants) that will run your model. You can see the status of this build with `cog show`, replacing the ID with your model's ID:

-This will take a few minutes. In the meantime take a look at `cog.yaml` and `model.py` to see what we're building.
+    $ cog show b31f9f72d8f14f0eacc5452e85b05c957b9a8ed9
+    ...
+    CPU image: Building. Run `cog build logs 8f3a05c42ee5` to see its progress.
+    GPU image: Building. Run `cog build logs b087a0bb5b7a` to see its progress.

-When that has finished, you can run inferences on the built model from any machine that is pointed at the server. Replace the ID with your model's ID, and the file with an image on your disk you want to colorize:
+When the build has finished, you can run inferences on the built model from any machine that is pointed at the server. Replace the ID with your model's ID, and the file with an image on your disk that you want to colorize:

     cog infer b31f9f72d8f14f0eacc5452e85b05c957b9a8ed9 -i @hotdog.jpg

 You can also list the models for this repo:

     cog list

-You can see more details about the model:
+## Deploying the model
+
+Cog builds Docker images for each version of your model. Those Docker images serve an HTTP inference API (to be documented).
+
+To get the Docker image, run this with the version you want to deploy:

     cog show b31f9f72d8f14f0eacc5452e85b05c957b9a8ed9

-In this output is the Docker image. You can run this anywhere a Docker image runs to deploy your model. For example:
+In this output are the names of the CPU and GPU Docker images, depending on whether you are deploying on CPU or GPU. You can run these anywhere a Docker image runs. For example:

-    $ docker run -d -p 8000:8000 --gpus all registry.hooli.net/colorization:b6a2f8a2d2ff-gpu
-    $ curl http://localhost:8000/infer -F [email protected]
+    $ docker run -d -p 5000:5000 --gpus all registry.hooli.net/colorization:b6a2f8a2d2ff-gpu
+    $ curl http://localhost:5000/infer -F [email protected]
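The deployment commands in the diff above can be sketched end to end. This is illustrative only: the image name and port come from the example output in the diff, and nothing here talks to a real server, so the `docker run` and `curl` invocations are left as comments rather than executed.

```shell
# Illustrative sketch using the example values from the diff above.
IMAGE=registry.hooli.net/colorization:b6a2f8a2d2ff-gpu
PORT=5000  # the updated docs expose the inference API on port 5000

# To deploy (requires Docker and, for this GPU image, an NVIDIA GPU):
#   docker run -d -p "$PORT:$PORT" --gpus all "$IMAGE"
# Then run an inference by POSTing a file to the HTTP API:
#   curl "http://localhost:$PORT/infer" -F input=@hotdog.jpg

# The inference endpoint is just the host plus the /infer path:
INFER_URL="http://localhost:$PORT/infer"
echo "$INFER_URL"
```

Dropping `--gpus all` and switching the tag suffix to `-cpu` would give the CPU variant of the same deployment.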
