This repository was archived by the owner on Jan 7, 2026. It is now read-only.
The only workflow downside I see with this approach is that you can't zoom in on individual projects as easily any more: if I open my editor on `./spark-api/`, I need to keep my CWD in `.` and remember to run `npm run test:api`. It would be nice if I could `cd` into that folder and just run `npm test`. Is this a concern for you?
What if, instead, we give every subrepo its own `package.json` and `node_modules`? We can still have a parent `package.json` at the top level, which shells out to the sub-packages' `package.json` scripts for running tests etc. I think this matches the Lerna experience more closely as well. But are there downsides to this approach?
> The only workflow downside I see with this approach is that you can't zoom in on individual projects as easily any more: if I open my editor on `./spark-api/`, I need to keep my CWD in `.` and remember to run `npm run test:api`. It would be nice if I could `cd` into that folder and just run `npm test`. Is this a concern for you?
Makes sense. I agree we should at least try to make this workflow easier.
I usually open my editor/IDE in the monorepo root, so this is not a concern to me.
My thoughts:
- When you run `npm` in a subdirectory and there is no `package.json` file, npm walks up the directory tree to the monorepo root and uses the `package.json` file there.
- However, `npm test` sets the working directory to the directory containing that `package.json`, so it will run all tests.
- You can run something like `npx mocha test`, but that's ugly.
- You can also run `npm run test:spark-api`, but that's not ideal either.
- We can create a small `package.json` file containing only the scripts:
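For illustration, such a scripts-only manifest in `spark-api/` might just forward to the root scripts (a sketch; the `test:api` script name and the relative path are assumptions based on the discussion above):

```json
{
  "private": true,
  "scripts": {
    "test": "cd .. && npm run test:api"
  }
}
```

This keeps all dependencies and tooling in the root `package.json` while still letting you `cd spark-api && npm test`.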
> What if, instead, we give every subrepo its own `package.json` and `node_modules`? We can still have a parent `package.json` at the top level, which shells out to the sub-packages' `package.json` scripts for running tests etc. I think this matches the Lerna experience more closely as well. But are there downsides to this approach?
Yes, this is the Lerna/npm workspace setup.
From what I remember, this setup makes many tasks slower:
- `npm install` needs to install multiple copies of the same package when it's used by multiple components.
- Depending on the install strategy, you can end up with all packages installed in the root `node_modules` folder. As a result, the code in spark-publish can import dependencies declared only in spark-api, and we won't notice that.
- We probably need to configure TypeScript with a multi-project setup. The last time I used such a setup in a large monorepo (~2021), it took seconds to minutes for `tsc` just to load the entire tree and decide what needs to be (re)checked.
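For reference, the npm workspaces variant of this layout is driven by a root manifest along these lines (the directory names are assumptions taken from the components mentioned in this thread):

```json
{
  "private": true,
  "workspaces": [
    "spark-api",
    "spark-publish"
  ],
  "scripts": {
    "test": "npm test --workspaces"
  }
}
```

With this, `npm install` at the root installs (and by default hoists) dependencies for all workspaces, and `npm test --workspaces` runs each workspace's `test` script in turn.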
> Do you see any downsides with that?
The downside would be that if you add a dependency from the subdirectory, it won't be added to the right place.
> Yes, this is the Lerna/npm workspace setup.
> From what I remember, this setup makes many tasks slower:
> `npm install` needs to install multiple copies of the same package when it's used by multiple components.
To me, npm install time doesn't matter these days. I don't run it often, and the cache + my internet are fast enough. If it's a significant issue on your setup, or for your workflow, I'm happy to value this more.
> Depending on the install strategy, you can end up with all packages installed in the root `node_modules` folder. As a result, the code in spark-publish can import dependencies declared only in spark-api, and we won't notice that.
In which situation would all packages be installed in the root folder? I believe that when you run `cd api && npm install` and there's `api/package.json`, then dependencies have to go to `api/node_modules/`.
> We probably need to configure TypeScript with a multi-project setup. The last time I used such a setup in a large monorepo (~2021), it took seconds to minutes for `tsc` just to load the entire tree and decide what needs to be (re)checked.
Ah right I remember that from our talk. I definitely don't want to introduce TS headaches here.
The reason will be displayed to describe this comment to others. Learn more.
> To me, npm install time doesn't matter these days. I don't run it often, and the cache + my internet are fast enough. If it's a significant issue on your setup, or for your workflow, I'm happy to value this more.
I had kind of resigned myself to slow npm installs, until I had a chance to experience how fast pnpm is, and ever since then I've been missing its speed. Switching from npm to pnpm is out of scope for this work, though.
> Depending on the install strategy, you can end up with all packages installed in the root `node_modules` folder. As a result, the code in spark-publish can import dependencies declared only in spark-api, and we won't notice that.
> In which situation would all packages be installed in the root folder? I believe that when you run `cd api && npm install` and there's `api/package.json`, then dependencies have to go to `api/node_modules/`.
npm's default configuration is to hoist dependencies: it will always install as many dependencies as possible in the root `node_modules` folder. From the npm docs on `install-strategy`:

> Default: "hoisted"
> Type: "hoisted", "nested", "shallow", or "linked"
>
> Sets the strategy for installing packages in `node_modules`:
> - `hoisted` (default): install non-duplicated packages at the top level, and duplicated packages as necessary within the directory structure.
> - `nested` (formerly `--legacy-bundling`): install in place, no hoisting.
> - `shallow` (formerly `--global-style`): only install direct deps at the top level.
> - `linked` (experimental): install in `node_modules/.store`, link in place, unhoisted.
> We probably need to configure TypeScript with a multi-project setup. The last time I used such a setup in a large monorepo (~2021), it took seconds to minutes for `tsc` just to load the entire tree and decide what needs to be (re)checked.
> Ah right I remember that from our talk. I definitely don't want to introduce TS headaches here.
We also don't have TypeScript checks configured in this repository yet. I'm willing to take that risk and not worry about it until I start working on adding `tsc`.
I think we should be fine as long as our monorepo stays small (less than 10 sub-packages, less than 100 source files).
Run tests and linters:
```bash
npm test
npm run test:api
```
## Deployment
Pushes to `main` will be deployed automatically.
Perform manual devops using [Fly.io](https://fly.io):