Description
What happened:
When deleting a directory in JuiceFS with `hadoop fs -rm -r` as the spark user, the operation fails with a permission error: creating the trash directory jfs://jfs3/user/spark/.Trash is denied.
[root@cdp1 ~]# su spark
[spark@cdp1 root]$ hadoop fs -ls jfs://jfs3/
Found 4 items
drwxrwxrwx - spark hdfs 4096 2026-01-22 16:20 jfs://jfs3/dir1
drwxrwxrwx - 4028543350 hdfs 4096 2026-01-22 16:25 jfs://jfs3/dir3
drwxrwxrwx - hdfs hdfs 4096 2026-01-22 16:26 jfs://jfs3/dir4
drwx------ - hdfs hdfs 4096 2026-01-22 16:28 jfs://jfs3/user
[spark@cdp1 root]$ hadoop fs -rm -r jfs://jfs3/dir1
26/01/22 16:31:48 WARN fs.TrashPolicyDefault: Can't create trash directory: jfs://jfs3/user/spark/.Trash/Current
org.apache.hadoop.security.AccessControlException: Permission denied: jfs://jfs3/user/spark/.Trash
at io.juicefs.JuiceFileSystemImpl.error(JuiceFileSystemImpl.java:299)
at io.juicefs.JuiceFileSystemImpl.mkdirs(JuiceFileSystemImpl.java:1811)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:336)
at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:153)
at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:110)
at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:96)
at org.apache.hadoop.fs.shell.Delete$Rm.moveToTrash(Delete.java:154)
at org.apache.hadoop.fs.shell.Delete$Rm.processPath(Delete.java:119)
at org.apache.hadoop.fs.shell.Command.processPathInternal(Command.java:370)
at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:333)
at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:306)
at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:288)
at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:272)
at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:121)
at org.apache.hadoop.fs.shell.Command.run(Command.java:179)
at org.apache.hadoop.fs.FsShell.run(FsShell.java:327)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:390)
rm: Failed to move to trash: jfs://jfs3/dir1: Permission denied: jfs://jfs3/user/spark/.Trash. Consider using -skipTrash option
What you expected to happen:
The deletion succeeds when run as the spark user.
How to reproduce it (as minimally and precisely as possible):
1. Complete the JuiceFS Hadoop configuration so that Hadoop can access the JuiceFS file system.
2. Switch to the hdfs user, create a directory, then delete it; this succeeds.
3. Switch to the spark user, create a directory, then delete it; this fails.
Anything else we need to know?
1. Initially there is no user directory in the JuiceFS root.
2. The first delete operation triggers creation of the user directory; its uid and gid match the operating user, and its mode is 700.
3. After switching users, the second delete fails: the new user has no permission on the user directory and therefore cannot create a subdirectory under it, so the move to trash fails.
4. Changing the user directory's mode to 777 works around the problem, as does passing -skipTrash to the delete.
5. Could the user directory be created with mode 777 in the first place?
[hdfs@cdp1 root]$ hadoop fs -ls jfs://jfs3/
Found 4 items
drwxrwxrwx - spark hdfs 4096 2026-01-22 16:20 jfs://jfs3/dir1
drwxrwxrwx - hdfs hdfs 4096 2026-01-22 16:25 jfs://jfs3/dir2
drwxrwxrwx - 4028543350 hdfs 4096 2026-01-22 16:25 jfs://jfs3/dir3
drwxrwxrwx - hdfs hdfs 4096 2026-01-22 16:26 jfs://jfs3/dir4
[hdfs@cdp1 root]$ hadoop fs -rm -r jfs://jfs3/dir2
26/01/22 16:28:09 INFO fs.TrashPolicyDefault: Moved: 'jfs://jfs3/dir2' to trash at: jfs://jfs3/user/hdfs/.Trash/Current/dir2
[hdfs@cdp1 root]$ hadoop fs -ls jfs://jfs3/
Found 4 items
drwxrwxrwx - spark hdfs 4096 2026-01-22 16:20 jfs://jfs3/dir1
drwxrwxrwx - 4028543350 hdfs 4096 2026-01-22 16:25 jfs://jfs3/dir3
drwxrwxrwx - hdfs hdfs 4096 2026-01-22 16:26 jfs://jfs3/dir4
drwx------ - hdfs hdfs 4096 2026-01-22 16:28 jfs://jfs3/user
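The transcripts above can be reduced to a small permission-check sketch. This is illustrative Python, not JuiceFS code; the uids/gids are made-up values standing in for the hdfs and spark accounts. It shows why a `/user` directory auto-created with mode 700 (owned by hdfs) blocks spark from creating `spark/.Trash` beneath it, while mode 777 does not.

```python
# Minimal POSIX-style mode check (illustrative only, not the JuiceFS
# implementation). Creating a child entry requires write+execute
# permission on the parent directory.

def can_create_in(uid, gid, owner_uid, owner_gid, mode):
    """Return True if user (uid, gid) may create entries in a directory
    owned by (owner_uid, owner_gid) with the given octal mode.
    Owner bits are checked first, then group bits, then 'other' bits."""
    if uid == owner_uid:
        bits = (mode >> 6) & 0o7   # owner rwx
    elif gid == owner_gid:
        bits = (mode >> 3) & 0o7   # group rwx
    else:
        bits = mode & 0o7          # other rwx
    return (bits & 0o3) == 0o3     # need write (2) and execute (1)

# Hypothetical ids: hdfs uid/gid 1000, spark uid 1001 (group hdfs,
# matching the listings above where spark's dirs have group hdfs).
HDFS_UID, SPARK_UID, HDFS_GID = 1000, 1001, 1000

# /user was auto-created by the hdfs user's first delete: hdfs:hdfs, 700.
print(can_create_in(HDFS_UID, HDFS_GID, HDFS_UID, HDFS_GID, 0o700))   # True
print(can_create_in(SPARK_UID, HDFS_GID, HDFS_UID, HDFS_GID, 0o700))  # False: spark denied

# After `hadoop fs -chmod 777 jfs://jfs3/user`, spark can create .Trash.
print(can_create_in(SPARK_UID, HDFS_GID, HDFS_UID, HDFS_GID, 0o777))  # True
```

The group branch is the one that fires for spark here: spark is in group hdfs, so the group bits of mode 700 (which are 0) apply, and the create is denied.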
Environment:
- JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: juicefs 1.3.0 / juicefs-hadoop-1.3.0.jar
- Cloud provider or hardware configuration running JuiceFS: NA
- OS (e.g. `cat /etc/os-release`): NA
- Kernel (e.g. `uname -a`): NA
- Object storage (cloud provider and region, or self maintained): NA
- Metadata engine info (version, cloud provider managed or self maintained): tikv-v6.1.7
- Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage): NA
- Others: