Hi,
I am using the bulk export feature as described in the documentation.
After creating the S3 destination, I check the status of the export job with this request:
```shell
curl --request GET \
  --url 'https://api.smith.langchain.com/api/v1/bulk-exports/{export_id}' \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: YOUR_API_KEY' \
  --header 'X-Tenant-Id: YOUR_WORKSPACE_ID'
```
The result is as follows:

```json
{
  "bulk_export_destination_id": "ceb0XXXX",
  "session_id": "9a04XXXXXXX",
  "start_time": "2025-10-28T00:00:00+00:00",
  "end_time": "2025-10-29T23:59:59+00:00",
  "filter": "and(eq(is_root, true), eq(feedback_key, \"prompt_intent\"))",
  "format": "Parquet",
  "compression": "gzip",
  "interval_hours": null,
  "id": "4a09bXXXXXX",
  "tenant_id": "6bd3XXXXXX",
  "status": "Running",
  "created_at": "2025-10-29T10:55:36.008282+00:00",
  "updated_at": "2025-10-29T10:56:01.616411+00:00",
  "finished_at": null,
  "source_bulk_export_id": null
}
```
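For reference, this is how I check for completion programmatically. A minimal sketch, assuming the terminal state names are `Completed`/`Failed`/`Cancelled` (the exact set is an assumption on my part; the payload below mirrors the anonymized response above):

```python
import json

# Anonymized copy of the status payload returned by the endpoint above.
payload = """
{
  "id": "4a09bXXXXXX",
  "status": "Running",
  "finished_at": null,
  "source_bulk_export_id": null
}
"""
export = json.loads(payload)

# Assumed terminal states; a job is only done when the status is
# terminal AND finished_at has been populated.
TERMINAL = {"Completed", "Failed", "Cancelled"}
is_done = export["status"] in TERMINAL and export["finished_at"] is not None
print(export["status"], is_done)
```

With the response I am getting, this always prints `Running False`.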
I am exporting one day's worth of traces (~600 traces in this case). I can see that traces are present in the S3 bucket (though I'm not sure all of them are there), but whenever I query the status endpoint it returns the Running state.
It has been an hour and the status hasn't changed.
This causes two issues:
- Even if all the traces landed in S3 as they should, as long as the status is not Completed there is no way to tell whether the task has finished. This is very problematic for proper data pipeline orchestration.
- I am testing with one day (~600 traces); eventually I would like to export all my traces, which is more than 50K. If the export takes this long for a small number, exporting 50K is not feasible.
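For the orchestration problem, the best I can do right now is poll with a hard timeout. A minimal sketch, with the status fetcher injected (in real use it would wrap the `GET /api/v1/bulk-exports/{export_id}` call shown above; here it is stubbed so the example is self-contained, and the `"Running"` state name is taken from the response I am seeing):

```python
import time

def wait_for_export(fetch_status, timeout_s=3600, poll_s=30):
    """Poll a status-returning callable until the export leaves the
    Running state or the timeout elapses.

    fetch_status: callable returning the status string, e.g. a thin
    wrapper around the bulk-exports status endpoint.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status != "Running":
            return status
        time.sleep(poll_s)
    raise TimeoutError("bulk export still Running after timeout")

# Stubbed fetcher: reports Running twice, then Completed.
responses = iter(["Running", "Running", "Completed"])
final = wait_for_export(lambda: next(responses), timeout_s=10, poll_s=0)
print(final)  # Completed
```

The problem is that with the behavior described above, this loop only ever hits the timeout, so there is no reliable completion signal to build on.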
Can someone please look into this and check whether it might be a bug on the LangSmith side?
Thank you