Add postgres support #1254
Repository: transformerlab/transformerlab-app · base: main · changes from all commits
New file: `api/alembic/versions/a1b2c3d4e5f6_convert_uuid_columns_for_postgres.py`

```python
"""Convert user and oauth_account UUID columns to native UUID type for PostgreSQL

This migration fixes the CHAR(36) vs native UUID type mismatch that causes
PostgreSQL to reject queries with the error:

    "operator does not exist: character = uuid"

FastAPI-users expects native UUID columns in PostgreSQL. The initial migration
created CHAR(36) columns (an SQLite compatibility workaround). This migration
converts those columns to the native UUID type on PostgreSQL. It is a no-op
for SQLite, which has no native UUID type.

Revision ID: a1b2c3d4e5f6
Revises: 4937b0e0647c
Create Date: 2026-02-28 00:00:00.000000
"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision: str = "a1b2c3d4e5f6"
down_revision: Union[str, Sequence[str], None] = "4937b0e0647c"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Convert CHAR(36) UUID columns to native UUID type on PostgreSQL."""
    connection = op.get_bind()

    if connection.dialect.name != "postgresql":
        # SQLite has no native UUID type — nothing to do
        return

    # Convert user.id from CHAR(36) to native UUID
    op.execute(sa.text('ALTER TABLE "user" ALTER COLUMN id TYPE uuid USING id::uuid'))

    # Convert oauth_account.id and oauth_account.user_id from CHAR(36) to native UUID
    op.execute(sa.text("ALTER TABLE oauth_account ALTER COLUMN id TYPE uuid USING id::uuid"))
    op.execute(sa.text("ALTER TABLE oauth_account ALTER COLUMN user_id TYPE uuid USING user_id::uuid"))
```
Review comment on lines +39 to +43 (Contributor):

Add a validation guard before casting the CHAR(36) columns to UUID. The migration casts `user.id`, `oauth_account.id`, and `oauth_account.user_id` with `::uuid`, and PostgreSQL aborts the migration on the first malformed value. While FastAPI-users auto-generates valid UUIDs for these columns in normal operation, defensive validation before the schema change is prudent and follows the pattern already established in other migrations (several existing migrations pre-validate rows with `LIMIT 1` / `fetchone()` checks).

Suggested guard:

```diff
 def upgrade() -> None:
     """Convert CHAR(36) UUID columns to native UUID type on PostgreSQL."""
     connection = op.get_bind()

     if connection.dialect.name != "postgresql":
         # SQLite has no native UUID type — nothing to do
         return

+    invalid_user_id = connection.execute(
+        sa.text(
+            """
+            SELECT id FROM "user"
+            WHERE id IS NOT NULL
+              AND id !~* '^[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$'
+            LIMIT 1
+            """
+        )
+    ).fetchone()
+    if invalid_user_id:
+        raise RuntimeError(f'Invalid UUID in "user.id": {invalid_user_id[0]}')
+
     # Convert user.id from CHAR(36) to native UUID
     op.execute(sa.text('ALTER TABLE "user" ALTER COLUMN id TYPE uuid USING id::uuid'))
```
The file's `downgrade()` reverses the conversion:

```python
def downgrade() -> None:
    """Revert native UUID columns back to VARCHAR(36) on PostgreSQL."""
    connection = op.get_bind()

    if connection.dialect.name != "postgresql":
        return

    op.execute(sa.text('ALTER TABLE "user" ALTER COLUMN id TYPE varchar(36) USING id::text'))
    op.execute(sa.text("ALTER TABLE oauth_account ALTER COLUMN id TYPE varchar(36) USING id::text"))
    op.execute(sa.text("ALTER TABLE oauth_account ALTER COLUMN user_id TYPE varchar(36) USING user_id::text"))
```
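The dialect guard's premise, that SQLite can keep UUIDs in CHAR(36) columns untouched because they are just text there, is easy to check with the standard library alone. A minimal sketch (table and value are invented for illustration):

```python
import sqlite3
import uuid

uid = str(uuid.uuid4())  # 36-char hyphenated text form, as stored in CHAR(36)

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "user" (id CHAR(36) PRIMARY KEY)')
conn.execute('INSERT INTO "user" (id) VALUES (?)', (uid,))

# On SQLite, UUID equality is plain string equality, so no conversion is
# needed; the type mismatch only bites on PostgreSQL, hence the no-op branch.
row = conn.execute('SELECT id FROM "user" WHERE id = ?', (uid,)).fetchone()
assert row[0] == uid
```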
Modified file: `api/alembic/versions/c78d76a6d65c_add_team_id_to_config_table.py`
@@ -23,86 +23,76 @@ def upgrade() -> None: | |
| """Upgrade schema.""" | ||
| connection = op.get_bind() | ||
|
|
||
| # Check existing columns | ||
| column_result = connection.execute(sa.text("PRAGMA table_info(config)")) | ||
| existing_columns = [row[1] for row in column_result.fetchall()] | ||
|
|
||
| # Get existing indexes by querying SQLite directly | ||
| # SQLite stores unique constraints as unique indexes | ||
| index_result = connection.execute( | ||
| sa.text("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='config'") | ||
| ) | ||
| existing_index_names = [row[0] for row in index_result.fetchall()] | ||
|
|
||
| # Add columns first (outside batch mode to avoid circular dependency) | ||
| # Only add if they don't already exist | ||
| if "user_id" not in existing_columns: | ||
| op.add_column("config", sa.Column("user_id", sa.String(), nullable=True)) | ||
| if "team_id" not in existing_columns: | ||
| op.add_column("config", sa.Column("team_id", sa.String(), nullable=True)) | ||
|
|
||
| # Handle indexes outside of batch mode to avoid type inference issues | ||
| # Drop existing unique index on key if it exists (to recreate as non-unique) | ||
| if "ix_config_key" in existing_index_names: | ||
| # Check if it's unique by querying the index definition | ||
| index_info = connection.execute( | ||
| sa.text("SELECT sql FROM sqlite_master WHERE type='index' AND name='ix_config_key'") | ||
| ).fetchone() | ||
| if index_info and index_info[0] and "UNIQUE" in index_info[0].upper(): | ||
| # Drop the unique index using raw SQL to avoid batch mode issues | ||
| connection.execute(sa.text("DROP INDEX IF EXISTS ix_config_key")) | ||
| existing_index_names.remove("ix_config_key") # Update our list | ||
|
|
||
| # Create new indexes (non-unique) - these can be done outside batch mode | ||
| if "ix_config_key" not in existing_index_names: | ||
| # Add columns (outside batch mode to avoid circular dependency) | ||
| op.add_column("config", sa.Column("user_id", sa.String(), nullable=True)) | ||
| op.add_column("config", sa.Column("team_id", sa.String(), nullable=True)) | ||
|
|
||
| # Drop old unique index on key if it exists, then recreate as non-unique | ||
| try: | ||
| op.drop_index("ix_config_key", table_name="config") | ||
| except Exception: | ||
|
Review comment on lines +30 to +33 (Contributor):

Dropping the `ix_config_key` index alone is not sufficient: the initial migration creates the config table with both a unique index and a table-level unique constraint on `key`, so the constraint must be dropped as well or the column will still reject duplicate keys.

Suggested fix:

```diff
+    inspector = sa.inspect(connection)
+    for uc in inspector.get_unique_constraints("config"):
+        if uc.get("column_names") == ["key"] and uc.get("name"):
+            op.drop_constraint(uc["name"], "config", type_="unique")
+            break
```
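The inspector-based lookup in the suggested fix can be exercised against an in-memory database. A sketch assuming SQLAlchemy's reflection API (`sa.inspect(...).get_unique_constraints`); the table and constraint names here are invented for illustration, not the repository's schema:

```python
import sqlalchemy as sa

# Build a throwaway table that carries a named unique constraint on "key".
engine = sa.create_engine("sqlite:///:memory:")
meta = sa.MetaData()
sa.Table(
    "config",
    meta,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("key", sa.String),
    sa.UniqueConstraint("key", name="uq_config_key"),
)
meta.create_all(engine)

# Find whichever unique constraint covers ["key"] without hardcoding its name,
# which is the point of the suggested fix.
inspector = sa.inspect(engine)
matches = [
    uc["name"]
    for uc in inspector.get_unique_constraints("config")
    if uc.get("column_names") == ["key"]
]
assert matches == ["uq_config_key"]
```

The name-agnostic lookup matters because SQLite and PostgreSQL databases created at different schema revisions may carry differently named (or auto-named) constraints.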
```diff
+        pass  # Index doesn't exist or already dropped
+
+    # Create indexes (will fail silently if they already exist in some databases)
+    try:
         op.create_index("ix_config_key", "config", ["key"], unique=False)
-    if "ix_config_user_id" not in existing_index_names:
+    except Exception:
+        pass
+
+    try:
         op.create_index("ix_config_user_id", "config", ["user_id"], unique=False)
-    if "ix_config_team_id" not in existing_index_names:
+    except Exception:
+        pass
+
+    try:
         op.create_index("ix_config_team_id", "config", ["team_id"], unique=False)
+    except Exception:
+        pass

-    # For SQLite, unique constraints are stored as unique indexes
-    # Create the unique constraint as a unique index using raw SQL to avoid batch mode issues
-    if "uq_config_user_team_key" not in existing_index_names:
-        connection.execute(
-            sa.text("CREATE UNIQUE INDEX IF NOT EXISTS uq_config_user_team_key ON config(user_id, team_id, key)")
-        )
+    # Create unique constraint on (user_id, team_id, key)
+    try:
+        op.create_unique_constraint("uq_config_user_team_key", "config", ["user_id", "team_id", "key"])
+    except Exception:
```
Review comment on lines +53 to +55 (Contributor), on the `uq_config_user_team_key` unique constraint and the nullable column added at line 27.
```diff
+        pass  # Constraint already exists

     # Migrate existing configs to admin user's first team
     # Note: Don't call connection.commit() - Alembic manages transactions
     connection = op.get_bind()

     # Find admin user's first team
+    users_teams = sa.table("users_teams", sa.column("user_id"), sa.column("team_id"))
+    users = sa.table("user", sa.column("id"), sa.column("email"))
+
     admin_team_result = connection.execute(
-        sa.text("""
-            SELECT ut.team_id
-            FROM users_teams ut
-            JOIN user u ON ut.user_id = u.id
-            WHERE u.email = '[email protected]'
-            LIMIT 1
-        """)
+        sa.select(users_teams.c.team_id)
+        .select_from(users_teams.join(users, users_teams.c.user_id == users.c.id))
+        .where(users.c.email == "[email protected]")
+        .limit(1)
     )
     admin_team_row = admin_team_result.fetchone()

     if admin_team_row:
         admin_team_id = admin_team_row[0]
         # Update all existing configs (where team_id is NULL) to use admin team
+        config_table = sa.table("config", sa.column("team_id"))
         connection.execute(
-            sa.text("UPDATE config SET team_id = :team_id WHERE team_id IS NULL"), {"team_id": admin_team_id}
+            sa.update(config_table).where(config_table.c.team_id.is_(None)).values(team_id=admin_team_id)
         )
         print(f"✅ Migrated existing configs to team {admin_team_id}")
     else:
         # If no admin team found, try to get any user's first team
-        any_team_result = connection.execute(sa.text("SELECT team_id FROM users_teams LIMIT 1"))
+        any_team_result = connection.execute(sa.select(users_teams.c.team_id).limit(1))
         any_team_row = any_team_result.fetchone()
         if any_team_row:
             any_team_id = any_team_row[0]
+            config_table = sa.table("config", sa.column("team_id"))
             connection.execute(
-                sa.text("UPDATE config SET team_id = :team_id WHERE team_id IS NULL"), {"team_id": any_team_id}
+                sa.update(config_table).where(config_table.c.team_id.is_(None)).values(team_id=any_team_id)
             )
             print(f"✅ Migrated existing configs to team {any_team_id}")
         else:
             # No teams found, delete existing configs
-            deleted_count = connection.execute(sa.text("DELETE FROM config WHERE team_id IS NULL")).rowcount
+            config_table = sa.table("config", sa.column("team_id"))
+            deleted_count = connection.execute(sa.delete(config_table).where(config_table.c.team_id.is_(None))).rowcount
             print(f"⚠️ No teams found, deleted {deleted_count} config entries")
     # ### end Alembic commands ###
```
@@ -111,33 +101,31 @@ def downgrade() -> None: | |
| """Downgrade schema.""" | ||
| connection = op.get_bind() | ||
|
|
||
| # Check existing indexes | ||
| index_result = connection.execute( | ||
| sa.text("SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='config'") | ||
| ) | ||
| existing_index_names = [row[0] for row in index_result.fetchall()] | ||
|
|
||
| # Check existing columns | ||
| column_result = connection.execute(sa.text("PRAGMA table_info(config)")) | ||
| existing_columns = [row[1] for row in column_result.fetchall()] | ||
|
|
||
| # Drop indexes and constraints outside of batch mode to avoid type inference issues | ||
| # Drop unique constraint (stored as unique index in SQLite) | ||
| if "uq_config_user_team_key" in existing_index_names: | ||
| connection.execute(sa.text("DROP INDEX IF EXISTS uq_config_user_team_key")) | ||
| # Drop unique constraint | ||
| try: | ||
| op.drop_constraint("uq_config_user_team_key", "config", type_="unique") | ||
| except Exception: | ||
| pass # Constraint doesn't exist | ||
|
|
||
| # Drop indexes | ||
| if "ix_config_team_id" in existing_index_names: | ||
| try: | ||
| op.drop_index("ix_config_team_id", table_name="config") | ||
| if "ix_config_user_id" in existing_index_names: | ||
| except Exception: | ||
| pass | ||
|
|
||
| try: | ||
| op.drop_index("ix_config_user_id", table_name="config") | ||
| if "ix_config_key" in existing_index_names: | ||
| except Exception: | ||
| pass | ||
|
|
||
| try: | ||
| op.drop_index("ix_config_key", table_name="config") | ||
| except Exception: | ||
| pass | ||
|
|
||
| # Drop columns using raw SQL to avoid batch mode type inference issues | ||
| # SQLite doesn't support DROP COLUMN directly, so we recreate the table | ||
| if "team_id" in existing_columns or "user_id" in existing_columns: | ||
| # Create new table without user_id and team_id columns | ||
| # Drop columns - SQLite < 3.35.0 doesn't support DROP COLUMN, so recreate table | ||
| if connection.dialect.name == "sqlite": | ||
| # Recreate table without user_id and team_id columns for SQLite compatibility | ||
| connection.execute( | ||
| sa.text(""" | ||
| CREATE TABLE config_new ( | ||
|
|
@@ -147,15 +135,24 @@ def downgrade() -> None: | |
| ) | ||
| """) | ||
| ) | ||
| # Copy data from old table to new table (only id, key, value columns) | ||
| connection.execute(sa.text("INSERT INTO config_new (id, key, value) SELECT id, key, value FROM config")) | ||
| # Drop old table (this also drops all indexes) | ||
| connection.execute(sa.text("DROP TABLE config")) | ||
| # Rename new table to original name | ||
| connection.execute(sa.text("ALTER TABLE config_new RENAME TO config")) | ||
| # Recreate the original unique index on key (it was dropped with the old table) | ||
| op.create_index("ix_config_key", "config", ["key"], unique=True) | ||
| else: | ||
| # If we're not dropping columns, just recreate the unique index on key | ||
| # PostgreSQL and modern SQLite support DROP COLUMN | ||
| try: | ||
| op.drop_column("config", "team_id") | ||
| except Exception: | ||
| pass | ||
|
|
||
| try: | ||
| op.drop_column("config", "user_id") | ||
| except Exception: | ||
| pass | ||
|
|
||
| # Recreate the original unique index on key | ||
| try: | ||
| op.create_index("ix_config_key", "config", ["key"], unique=True) | ||
| except Exception: | ||
| pass | ||
| # ### end Alembic commands ### | ||
Review comment:

Add mandatory type hints to the `include_object` function and make the psycopg2 dependency available for PostgreSQL migrations.

Lines 29-34 define `include_object` without type hints, violating the mandatory type-hints requirement; add argument and return type annotations. Additionally, the URL normalization on lines 41-42 converts `postgresql+asyncpg://` to `postgresql://`, which relies on psycopg2 as SQLAlchemy's default PostgreSQL driver. However, neither `psycopg2` nor `psycopg2-binary` is listed in the `api/pyproject.toml` dependencies, so PostgreSQL migrations fail at runtime with `ModuleNotFoundError: No module named 'psycopg2'`. Add the dependency to the appropriate optional dependency group (`nvidia`/`rocm`/`cpu`) alongside `asyncpg`.
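A hypothetical sketch of the two requested changes, assuming a typical Alembic `env.py` layout. The function bodies, the skip policy, and the `normalize_url` helper name are illustrative assumptions, not the repository's actual code; only the `include_object` signature follows Alembic's documented callback shape:

```python
from typing import Any, Optional


def include_object(
    obj: Any,
    name: Optional[str],
    type_: str,
    reflected: bool,
    compare_to: Optional[Any],
) -> bool:
    # Example policy (assumption): exclude SQLite-internal tables from
    # autogenerate comparisons; everything else is included.
    return not (type_ == "table" and name is not None and name.startswith("sqlite_"))


def normalize_url(url: str) -> str:
    """Swap the async driver out, since Alembic runs migrations synchronously.

    A plain postgresql:// URL makes SQLAlchemy fall back to psycopg2, which is
    exactly why the review asks for psycopg2/psycopg2-binary in pyproject.toml.
    """
    return url.replace("postgresql+asyncpg://", "postgresql://", 1)


assert normalize_url("postgresql+asyncpg://u:p@host/db") == "postgresql://u:p@host/db"
```

An alternative to adding psycopg2 would be normalizing to `postgresql+psycopg://` (the psycopg 3 driver) instead, but either way the chosen sync driver has to be declared as a dependency.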