diff --git a/docs/en/deployment/hadoop_java_sdk.md b/docs/en/deployment/hadoop_java_sdk.md
index 05944cd333a7..539babd234ae 100644
--- a/docs/en/deployment/hadoop_java_sdk.md
+++ b/docs/en/deployment/hadoop_java_sdk.md
@@ -741,6 +741,55 @@ JuiceFS can use local disk as a cache to accelerate data access, the following d
![parquet](../images/spark_sql_parquet.png)
+## Permission control by Apache Ranger
+
+JuiceFS currently supports path permission control by integrating with the `HDFS` module of Apache Ranger.
+
+### 1. Configurations
+
+| Configuration                     | Default Value | Description |
+|-----------------------------------|---------------|-------------|
+| `juicefs.ranger-rest-url`         |               | HTTP address of the Ranger Admin server. Leaving this unset disables the feature. |
+| `juicefs.ranger-service-name`     |               | The `service name` configured in Ranger's `HDFS` module. Required. |
+| `juicefs.ranger-cache-dir`        |               | Cache path for Ranger policies. By default a `UUID` path segment is appended under the directory given by the `java.io.tmpdir` system property, so that concurrent tasks do not interfere with each other. When a fixed directory is configured, multiple tasks share the cache and only one JuiceFS instance refreshes it, reducing the load on `Ranger Admin`. |
+| `juicefs.ranger-poll-interval-ms` | `30000`       | Interval for refreshing the Ranger policy cache. Default is 30 seconds. |
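+
+For example, the following minimal sketch enables the feature when creating a Hadoop `FileSystem` programmatically. The Ranger Admin address, service name, and the `jfs://myjfs/` URI below are placeholders, and the basic JuiceFS options (such as `juicefs.meta`) are assumed to be configured in `core-site.xml`:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+import java.net.URI;
+
+public class RangerEnabledJuiceFS {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Setting juicefs.ranger-rest-url is what enables Ranger permission checking
+    conf.set("juicefs.ranger-rest-url", "http://ranger-admin:6080");
+    conf.set("juicefs.ranger-service-name", "cl1_hadoop");
+    // Optional: a fixed cache dir lets concurrent tasks share one policy cache
+    conf.set("juicefs.ranger-cache-dir", "/tmp/ranger-policies");
+    conf.set("juicefs.ranger-poll-interval-ms", "30000");
+    try (FileSystem fs = FileSystem.newInstance(URI.create("jfs://myjfs/"), conf)) {
+      // Operations are now authorized against the Ranger policies of the service
+      System.out.println(fs.getFileStatus(new Path("/")));
+    }
+  }
+}
+```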
+
+### 2. Dependencies
+
+Considering the complexity of authorization environments and the possibility of dependency conflicts, the JAR packages required for Ranger authorization (such as `ranger-plugins-common-2.3.0.jar` and `ranger-plugins-audit-2.3.0.jar`) and their dependencies are not bundled into the JuiceFS SDK.
+
+If a `ClassNotFound` error occurs at runtime, it is recommended to add the missing JARs to a directory that is already on the classpath (such as `$SPARK_HOME/jars`).
+
+Dependencies that may need to be added separately:
+
+```shell
+ranger-plugins-common-2.3.0.jar
+ranger-plugins-audit-2.3.0.jar
+gethostname4j-1.0.0.jar
+jackson-jaxrs-1.9.13.jar
+jersey-client-1.19.jar
+jersey-core-1.19.jar
+jna-5.7.0.jar
+```
+
+### 3. Tips
+
+#### 3.1 Ranger version
+
+The code has been tested against Ranger 2.3 and Ranger 2.4. Since nothing beyond the `HDFS` module's authorization is used, other versions should theoretically work as well.
+
+#### 3.2 Ranger Audit
+
+Currently only the authorization function is supported; `Ranger Audit` is disabled.
+
+#### 3.3 Ranger's other parameters
+
+To keep usage simple, only the **core** parameters for connecting to Ranger are currently exposed.
+
+#### 3.4 Security tips
+
+Since the project is fully open source, it cannot be prevented that users bypass permission control by overriding parameters such as `juicefs.ranger-rest-url`. If stricter control is required, it is recommended to compile the code yourself and protect the relevant security parameters, for example by encrypting them.
+
## FAQ
### 1. `Class io.juicefs.JuiceFileSystem not found` exception
diff --git a/docs/zh_cn/deployment/hadoop_java_sdk.md b/docs/zh_cn/deployment/hadoop_java_sdk.md
index ee925582d8c6..f3c76b121b4e 100644
--- a/docs/zh_cn/deployment/hadoop_java_sdk.md
+++ b/docs/zh_cn/deployment/hadoop_java_sdk.md
@@ -866,6 +866,55 @@ JuiceFS can use local disk as a cache to accelerate data access, the following data is
![parquet](../images/spark_sql_parquet.png)
+## Permission control by Apache Ranger
+
+JuiceFS currently supports path permission control by integrating with the `HDFS` module of Apache Ranger.
+
+### 1. Configurations
+
+| Configuration                     | Default Value | Description |
+|-----------------------------------|---------------|-------------|
+| `juicefs.ranger-rest-url`         |               | HTTP address of the Ranger Admin server. Leaving this unset disables the feature. |
+| `juicefs.ranger-service-name`     |               | The `service name` configured in Ranger's `HDFS` module. Required. |
+| `juicefs.ranger-cache-dir`        |               | Cache path for Ranger policies. By default a `UUID` path segment is appended under the directory given by the `java.io.tmpdir` system property, so that concurrent tasks do not interfere with each other. When a fixed directory is configured, multiple tasks share the cache and only one JuiceFS instance refreshes it, reducing the load on `Ranger Admin`. |
+| `juicefs.ranger-poll-interval-ms` | `30000`       | Interval for refreshing the Ranger policy cache. Default is 30 seconds. |
+
+### 2. Dependencies
+
+Considering the complexity of authorization environments and the possibility of dependency conflicts, the JAR packages required for Ranger authorization (such as `ranger-plugins-common-2.3.0.jar` and `ranger-plugins-audit-2.3.0.jar`) and their dependencies are not bundled into the JuiceFS SDK.
+
+If a `ClassNotFound` error occurs at runtime, it is recommended to add the missing JARs to a directory that is already on the classpath (such as `$SPARK_HOME/jars`).
+
+Dependencies that may need to be added separately:
+
+```shell
+ranger-plugins-common-2.3.0.jar
+ranger-plugins-audit-2.3.0.jar
+gethostname4j-1.0.0.jar
+jackson-jaxrs-1.9.13.jar
+jersey-client-1.19.jar
+jersey-core-1.19.jar
+jna-5.7.0.jar
+```
+
+### 3. Tips
+
+#### 3.1 Ranger version
+
+The code has been tested against Ranger 2.3 and Ranger 2.4. Since nothing beyond the `HDFS` module's authorization is used, other versions should theoretically work as well.
+
+#### 3.2 Ranger Audit
+
+Currently only the authorization function is supported; `Ranger Audit` is disabled.
+
+#### 3.3 Ranger's other parameters
+
+To keep usage simple, only the **core** parameters for connecting to Ranger are currently exposed.
+
+#### 3.4 Security tips
+
+Since the project is fully open source, it cannot be prevented that users bypass permission control by overriding parameters such as `juicefs.ranger-rest-url`. If stricter control is required, it is recommended to compile the code yourself and protect the relevant security parameters, for example by encrypting them.
+
## FAQ
### 1. `Class io.juicefs.JuiceFileSystem not found` exception
diff --git a/sdk/java/pom.xml b/sdk/java/pom.xml
index a7e0ce0bf7c1..dd251c07ad90 100644
--- a/sdk/java/pom.xml
+++ b/sdk/java/pom.xml
@@ -350,6 +350,33 @@
+    <dependency>
+      <groupId>org.apache.ranger</groupId>
+      <artifactId>ranger-plugins-common</artifactId>
+      <version>2.3.0</version>
+      <exclusions>
+        <exclusion>
+          <groupId>*</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.ranger</groupId>
+      <artifactId>ranger-plugins-audit</artifactId>
+      <version>2.3.0</version>
+      <exclusions>
+        <exclusion>
+          <groupId>*</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
+      </exclusions>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.httpcomponents</groupId>
+      <artifactId>httpclient</artifactId>
+      <version>4.5.13</version>
+    </dependency>
diff --git a/sdk/java/src/main/java/io/juicefs/JuiceFileSystemImpl.java b/sdk/java/src/main/java/io/juicefs/JuiceFileSystemImpl.java
index 6cafdd7ee245..a86076733bbf 100644
--- a/sdk/java/src/main/java/io/juicefs/JuiceFileSystemImpl.java
+++ b/sdk/java/src/main/java/io/juicefs/JuiceFileSystemImpl.java
@@ -19,6 +19,8 @@
import com.kenai.jffi.internal.StubLoader;
import io.juicefs.exception.QuotaExceededException;
import io.juicefs.metrics.JuiceFSInstrumentation;
+import io.juicefs.permission.RangerConfig;
+import io.juicefs.permission.RangerPermissionChecker;
import io.juicefs.utils.*;
import jnr.ffi.LibraryLoader;
import jnr.ffi.Memory;
@@ -89,11 +91,20 @@ static String loadVersion() {
private Path workingDir;
private String name;
+ private String user;
+ private String group;
+ private Set<String> groups;
+ private String superuser;
+ private String supergroup;
private URI uri;
private long blocksize;
private int minBufferSize;
private int cacheReplica;
private boolean fileChecksumEnabled;
+ private static boolean permissionCheckEnabled = false;
+ private final boolean isSuperGroupFileSystem;
+ private JuiceFileSystemImpl superGroupFileSystem;
+ private RangerPermissionChecker rangerPermissionChecker;
private static Libjfs lib = loadLibrary();
private long handle;
@@ -270,7 +281,6 @@ private IOException error(int errno, Path p) {
return new FileNotFoundException(pStr+ ": not found");
} else if (errno == EACCESS) {
try {
- String user = ugi.getShortUserName();
FileStatus stat = getFileStatusInternalNoException(p);
if (stat != null) {
FsPermission perm = stat.getPermission();
@@ -305,6 +315,7 @@ private IOException error(int errno, Path p) {
}
public JuiceFileSystemImpl() {
+ this.isSuperGroupFileSystem = false;
}
@Override
@@ -356,16 +367,23 @@ public void initialize(URI uri, Configuration conf) throws IOException {
minBufferSize = conf.getInt("juicefs.min-buffer-size", 128 << 10);
cacheReplica = Integer.parseInt(getConf(conf, "cache-replica", "1"));
fileChecksumEnabled = Boolean.parseBoolean(getConf(conf, "file.checksum", "false"));
+ permissionCheckEnabled = getConf(conf, "ranger-rest-url", null) != null;
this.ugi = UserGroupInformation.getCurrentUser();
- String user = ugi.getShortUserName();
- String group = "nogroup";
+ user = ugi.getShortUserName();
+ group = "nogroup";
String groupingFile = getConf(conf, "groups", null);
if (isEmpty(groupingFile) && ugi.getGroupNames().length > 0) {
group = String.join(",", ugi.getGroupNames());
}
- String superuser = getConf(conf, "superuser", "hdfs");
- String supergroup = getConf(conf, "supergroup", conf.get("dfs.permissions.superusergroup", "supergroup"));
+ groups = Arrays.stream(group.split(",")).collect(Collectors.toSet());
+ superuser = getConf(conf, "superuser", "hdfs");
+ supergroup = getConf(conf, "supergroup", conf.get("dfs.permissions.superusergroup", "supergroup"));
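+ // The nested super-group filesystem created below runs as the supergroup, so
+ // operations approved by Ranger are executed with supergroup privileges,
+ // bypassing the local POSIX permission check.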
+ if (permissionCheckEnabled && isSuperGroupFileSystem) {
+ group = supergroup;
+ groups.clear();
+ groups.add(supergroup);
+ }
String mountpoint = getConf(conf, "mountpoint", "");
synchronized (JuiceFileSystemImpl.class) {
@@ -478,11 +496,75 @@ public void initialize(URI uri, Configuration conf) throws IOException {
JuiceFSInstrumentation.init(this, statistics);
}
+ if (permissionCheckEnabled) {
+ try {
+ if (!isSuperGroupFileSystem) {
+ RangerConfig rangerConfig = checkAndGetRangerParams(conf);
+ Configuration superConf = new Configuration(conf);
+ superGroupFileSystem = new JuiceFileSystemImpl(true);
+ superGroupFileSystem.initialize(uri, superConf);
+ rangerPermissionChecker = new RangerPermissionChecker(superGroupFileSystem, rangerConfig, user, group);
+ }
+ } catch (Exception e) {
+ if (rangerPermissionChecker != null) {
+ rangerPermissionChecker.cleanUp();
+ }
+ throw new RuntimeException("The initialization of the Permission Checker has failed. ", e);
+ }
+ }
+
String uidFile = getConf(conf, "users", null);
if (!isEmpty(uidFile) || !isEmpty(groupingFile)) {
updateUidAndGrouping(uidFile, groupingFile);
- refreshUidAndGrouping(uidFile, groupingFile);
+ if (!isSuperGroupFileSystem) {
+ refreshUidAndGrouping(uidFile, groupingFile);
+ }
+ }
+ }
+
+ private RangerConfig checkAndGetRangerParams(Configuration conf) throws RuntimeException, IOException {
+ String rangerRestUrl = getConf(conf, "ranger-rest-url", "");
+ if (!rangerRestUrl.startsWith("http")) {
+ throw new IOException("illegal value for parameter 'juicefs.ranger-rest-url': " + rangerRestUrl);
+ }
+
+ String serviceName = getConf(conf, "ranger-service-name", "");
+ if (serviceName.isEmpty()) {
+ throw new IOException("illegal value for parameter 'juicefs.ranger-service-name': " + serviceName);
}
+
+ String cacheDir = getConf(conf, "ranger-cache-dir", System.getProperty("java.io.tmpdir") + "/" + UUID.randomUUID());
+ String pollIntervalMs = getConf(conf, "ranger-poll-interval-ms", "30000");
+
+ return new RangerConfig(rangerRestUrl, serviceName, cacheDir, pollIntervalMs);
+ }
+
+ private JuiceFileSystemImpl(boolean isSuperGroupFileSystem) {
+ this.isSuperGroupFileSystem = isSuperGroupFileSystem;
+ }
+
+ private boolean hasSuperPermission() {
+ return user.equals(superuser) || groups.contains(supergroup);
+ }
+
+ private boolean needCheckPermission() {
+ return permissionCheckEnabled && !hasSuperPermission();
+ }
+
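+ // The helpers below check Ranger access against the path itself, its parent,
+ // or its nearest existing ancestor, mirroring the HDFS permission model.
+ // A return value of true means Ranger could not decide and the caller should
+ // fall back to the local permission check.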
+ private boolean checkPathAccess(Path path, FsAction action, String operation) throws IOException {
+ return rangerPermissionChecker.checkPermission(path, false, null, null, action, operation);
+ }
+
+ private boolean checkParentPathAccess(Path path, FsAction action, String operation) throws IOException {
+ return rangerPermissionChecker.checkPermission(path, false, null, action, null, operation);
+ }
+
+ private boolean checkAncestorAccess(Path path, FsAction action, String operation) throws IOException {
+ return rangerPermissionChecker.checkPermission(path, false, action, null, null, operation);
+ }
+
+ private boolean checkOwner(Path path, String operation) throws IOException {
+ return rangerPermissionChecker.checkPermission(path, true, null, null, null, operation);
}
private boolean isEmpty(String str) {
@@ -553,6 +635,7 @@ private void updateUidAndGrouping(String uidFile, String groupFile) throws IOExc
}
lib.jfs_update_uid_grouping(handle, uidstr, grouping);
+ groups = Arrays.stream(group.split(",")).collect(Collectors.toSet());
}
private void refreshUidAndGrouping(String uidFile, String groupFile) {
@@ -580,7 +663,7 @@ private void initializeStorageIds(Configuration conf) throws IOException {
@Override
public Path getHomeDirectory() {
- return makeQualified(new Path(homeDirPrefix + "/" + ugi.getShortUserName()));
+ return makeQualified(new Path(homeDirPrefix + "/" + user));
}
private static void initStubLoader() {
@@ -763,7 +846,7 @@ private void initCache(Configuration conf) {
}
private void refreshCache(Configuration conf) {
- BgTaskUtil.startScheduleTask(name, "Node fetcher", () -> {
+ BgTaskUtil.startScheduleTask(name, "Node fetcher", () -> {
initCache(conf);
}, 10, 10, TimeUnit.MINUTES);
}
@@ -816,6 +899,11 @@ public BlockLocation[] getFileBlockLocations(FileStatus file, long start, long l
if (file == null) {
return null;
}
+
+ if (needCheckPermission() && !checkPathAccess(file.getPath(), FsAction.READ, "getFileBlockLocations")) {
+ return superGroupFileSystem.getFileBlockLocations(file, start, len);
+ }
+
if (start < 0 || len < 0) {
throw new IllegalArgumentException("Invalid start or len parameter");
}
@@ -1050,6 +1138,9 @@ public synchronized void close() throws IOException {
@Override
public FSDataInputStream open(Path f, int bufferSize) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.READ, "open")) {
+ return superGroupFileSystem.open(f, bufferSize);
+ }
statistics.incrementReadOps(1);
ByteBuffer fileLen = ByteBuffer.allocate(8);
fileLen.order(ByteOrder.nativeOrder());
@@ -1063,6 +1154,10 @@ public FSDataInputStream open(Path f, int bufferSize) throws IOException {
@Override
public void access(Path path, FsAction mode) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, mode, "access")) {
+ superGroupFileSystem.access(path, mode);
+ return;
+ }
int r = lib.jfs_access(Thread.currentThread().getId(), handle, normalizePath(path), mode.ordinal());
if (r < 0)
throw error(r, path);
@@ -1232,6 +1327,9 @@ public boolean hasCapability(String capability) {
@Override
public FSDataOutputStream append(Path f, int bufferSize, Progressable progress) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.WRITE, "append")) {
+ return superGroupFileSystem.append(f, bufferSize, progress);
+ }
statistics.incrementWriteOps(1);
int fd = lib.jfs_open(Thread.currentThread().getId(), handle, normalizePath(f), null, MODE_MASK_W);
if (fd < 0)
@@ -1245,6 +1343,13 @@ public FSDataOutputStream append(Path f, int bufferSize, Progressable progress)
@Override
public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize,
short replication, long blockSize, Progressable progress) throws IOException {
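+ // Ranger grants create via write access on the ancestor directory; overwriting
+ // an existing file additionally requires write access on the file itself.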
+ if (needCheckPermission() && !checkAncestorAccess(f, FsAction.WRITE, "create")) {
+ if (!overwrite || !superGroupFileSystem.exists(f)) {
+ return superGroupFileSystem.create(f, permission, overwrite, bufferSize, replication, blockSize, progress);
+ } else if (!checkPathAccess(f, FsAction.WRITE, "create")) {
+ return superGroupFileSystem.create(f, permission, overwrite, bufferSize, replication, blockSize, progress);
+ }
+ }
statistics.incrementWriteOps(1);
while (true) {
int fd = lib.jfs_create(Thread.currentThread().getId(), handle, normalizePath(f), permission.toShort(), uMask.toShort());
@@ -1280,6 +1385,13 @@ private int checkBufferSize(int size) {
@Override
public FSDataOutputStream createNonRecursive(Path f, FsPermission permission, EnumSet<CreateFlag> flag,
int bufferSize, short replication, long blockSize, Progressable progress) throws IOException {
+ if (needCheckPermission() && !checkAncestorAccess(f, FsAction.WRITE, "createNonRecursive")) {
+ if (!flag.contains(CreateFlag.OVERWRITE) || !superGroupFileSystem.exists(f)) {
+ return superGroupFileSystem.createNonRecursive(f, permission, flag, bufferSize, replication, blockSize, progress);
+ } else if (!checkPathAccess(f, FsAction.WRITE, "createNonRecursive")) {
+ return superGroupFileSystem.createNonRecursive(f, permission, flag, bufferSize, replication, blockSize, progress);
+ }
+ }
statistics.incrementWriteOps(1);
int fd = lib.jfs_create(Thread.currentThread().getId(), handle, normalizePath(f), permission.toShort(), uMask.toShort());
while (fd == EEXIST) {
@@ -1312,6 +1424,9 @@ private FSDataOutputStream createFsDataOutputStream(Path f, int bufferSize, int
@Override
public FileChecksum getFileChecksum(Path f, long length) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.READ, "getFileChecksum")) {
+ return superGroupFileSystem.getFileChecksum(f, length);
+ }
statistics.incrementReadOps(1);
if (!fileChecksumEnabled)
return null;
@@ -1372,6 +1487,15 @@ public FileChecksum getFileChecksum(Path f, long length) throws IOException {
@Override
public void concat(final Path dst, final Path[] srcs) throws IOException {
+ if (needCheckPermission()) {
+ access(dst.getParent(), FsAction.WRITE);
+ access(dst, FsAction.WRITE);
+ for (Path src : srcs) {
+ access(src, FsAction.READ);
+ }
+ superGroupFileSystem.concat(dst, srcs);
+ return;
+ }
statistics.incrementWriteOps(1);
if (srcs.length == 0) {
throw new IllegalArgumentException("No sources given");
@@ -1412,6 +1536,15 @@ public void concat(final Path dst, final Path[] srcs) throws IOException {
@Override
public boolean rename(Path src, Path dst) throws IOException {
+ if (needCheckPermission()) {
+ if (!superGroupFileSystem.exists(src)) {
+ return false;
+ }
+ access(src.getParent(), FsAction.WRITE);
+ Path dstAncestor = rangerPermissionChecker.getAncestor(dst).getPath();
+ access(dstAncestor, FsAction.WRITE);
+ return superGroupFileSystem.rename(src, dst);
+ }
statistics.incrementWriteOps(1);
String srcStr = makeQualified(src).toUri().getPath();
String dstStr = makeQualified(dst).toUri().getPath();
@@ -1448,6 +1581,9 @@ public boolean rename(Path src, Path dst) throws IOException {
@Override
public boolean truncate(Path f, long newLength) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.WRITE, "truncate")) {
+ return superGroupFileSystem.truncate(f, newLength);
+ }
int r = lib.jfs_truncate(Thread.currentThread().getId(), handle, normalizePath(f), newLength);
if (r < 0)
throw error(r, f);
@@ -1467,6 +1603,17 @@ private boolean rmr(Path p) throws IOException {
@Override
public boolean delete(Path p, boolean recursive) throws IOException {
+ if (needCheckPermission()) {
+ try {
+ if (!checkParentPathAccess(p, FsAction.WRITE_EXECUTE, "delete")) {
+ return superGroupFileSystem.delete(p, recursive);
+ }
+ } catch (Exception e) {
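+ // The parent check may throw (e.g. Ranger denies on the parent); retry the
+ // check against the path itself.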
+ if (!checkPathAccess(p, FsAction.WRITE_EXECUTE, "delete")) {
+ return superGroupFileSystem.delete(p, recursive);
+ }
+ }
+ }
statistics.incrementWriteOps(1);
if (recursive)
return rmr(p);
@@ -1482,6 +1629,9 @@ public boolean delete(Path p, boolean recursive) throws IOException {
@Override
public ContentSummary getContentSummary(Path f) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.READ_EXECUTE, "getContentSummary")) {
+ return superGroupFileSystem.getContentSummary(f);
+ }
statistics.incrementReadOps(1);
String path = normalizePath(f);
Pointer buf = Memory.allocate(Runtime.getRuntime(lib), 24);
@@ -1522,6 +1672,9 @@ private FileStatus newFileStatus(Path p, Pointer buf, int size, boolean readlink
@Override
public FileStatus[] listStatus(Path f) throws FileNotFoundException, IOException {
+ if (needCheckPermission() && !checkPathAccess(f, FsAction.READ_EXECUTE, "listStatus")) {
+ return superGroupFileSystem.listStatus(f);
+ }
statistics.incrementReadOps(1);
int bufsize = 32 << 10;
Pointer buf = Memory.allocate(Runtime.getRuntime(lib), bufsize); // TODO: smaller buff
@@ -1585,6 +1738,9 @@ public Path getWorkingDirectory() {
@Override
public boolean mkdirs(Path f, FsPermission permission) throws IOException {
+ if (needCheckPermission() && !checkAncestorAccess(f, FsAction.WRITE, "mkdirs")) {
+ return superGroupFileSystem.mkdirs(f, permission);
+ }
statistics.incrementWriteOps(1);
if (f == null) {
throw new IllegalArgumentException("mkdirs path arg is null");
@@ -1606,6 +1762,9 @@ public boolean mkdirs(Path f, FsPermission permission) throws IOException {
@Override
public FileStatus getFileStatus(Path f) throws IOException {
+ if (needCheckPermission() && !checkParentPathAccess(f, FsAction.EXECUTE, "getFileStatus")) {
+ return superGroupFileSystem.getFileStatus(f);
+ }
statistics.incrementReadOps(1);
try {
return getFileStatusInternal(f, true);
@@ -1651,6 +1810,9 @@ public String getCanonicalServiceName() {
@Override
public FsStatus getStatus(Path p) throws IOException {
+ if (needCheckPermission() && !checkParentPathAccess(p, FsAction.EXECUTE, "getStatus")) {
+ return superGroupFileSystem.getStatus(p);
+ }
statistics.incrementReadOps(1);
Pointer buf = Memory.allocate(Runtime.getRuntime(lib), 16);
int r = lib.jfs_statvfs(Thread.currentThread().getId(), handle, buf);
@@ -1663,6 +1825,10 @@ public FsStatus getStatus(Path p) throws IOException {
@Override
public void setPermission(Path p, FsPermission permission) throws IOException {
+ if (needCheckPermission() && !checkOwner(p, "setPermission")) {
+ superGroupFileSystem.setPermission(p, permission);
+ return;
+ }
statistics.incrementWriteOps(1);
int r = lib.jfs_chmod(Thread.currentThread().getId(), handle, normalizePath(p), permission.toShort());
if (r != 0)
@@ -1671,6 +1837,18 @@ public void setPermission(Path p, FsPermission permission) throws IOException {
@Override
public void setOwner(Path p, String username, String groupname) throws IOException {
+ if (needCheckPermission()) {
+ if (username == null) {
+ throw new AccessControlException(
+ "User can not be null");
+ }
+ if (!superuser.equals(username)) {
+ throw new AccessControlException(
+ "Only SuperUser can do setOwner Action, the current user is " + username);
+ }
+ superGroupFileSystem.setOwner(p, username, groupname);
+ return;
+ }
statistics.incrementWriteOps(1);
int r = lib.jfs_setOwner(Thread.currentThread().getId(), handle, normalizePath(p), username, groupname);
if (r != 0)
@@ -1679,9 +1857,13 @@ public void setOwner(Path p, String username, String groupname) throws IOExcepti
@Override
public void setTimes(Path p, long mtime, long atime) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(p, FsAction.WRITE, "setTimes")) {
+ superGroupFileSystem.setTimes(p, mtime, atime);
+ return;
+ }
statistics.incrementWriteOps(1);
int r = lib.jfs_utime(Thread.currentThread().getId(), handle, normalizePath(p), mtime >= 0 ? mtime : -1,
- atime >= 0 ? atime : -1);
+ atime >= 0 ? atime : -1);
if (r != 0)
throw error(r, p);
}
@@ -1693,10 +1875,17 @@ public void close() throws IOException {
if (metricsEnable) {
JuiceFSInstrumentation.close();
}
+ if (rangerPermissionChecker != null) {
+ rangerPermissionChecker.cleanUp();
+ }
}
@Override
public void setXAttr(Path path, String name, byte[] value, EnumSet<XAttrSetFlag> flag) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, FsAction.WRITE, "setXAttr")) {
+ superGroupFileSystem.setXAttr(path, name, value, flag);
+ return;
+ }
Pointer buf = Memory.allocate(Runtime.getRuntime(lib), value.length);
buf.put(0, value, 0, value.length);
int mode = 0; // create or replace
@@ -1708,13 +1897,16 @@ public void setXAttr(Path path, String name, byte[] value, EnumSet
mode = 2;
}
int r = lib.jfs_setXattr(Thread.currentThread().getId(), handle, normalizePath(path), name, buf, value.length,
- mode);
+ mode);
if (r < 0)
throw error(r, path);
}
@Override
public byte[] getXAttr(Path path, String name) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, FsAction.READ, "getXAttr")) {
+ return superGroupFileSystem.getXAttr(path, name);
+ }
Pointer buf;
int bufsize = 16 << 10;
int r;
@@ -1739,6 +1931,9 @@ public Map getXAttrs(Path path) throws IOException {
@Override
public Map<String, byte[]> getXAttrs(Path path, List<String> names) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, FsAction.READ, "getXAttrs")) {
+ return superGroupFileSystem.getXAttrs(path, names);
+ }
Map<String, byte[]> result = new HashMap<>();
for (String n : names) {
byte[] value = getXAttr(path, n);
@@ -1751,6 +1946,9 @@ public Map getXAttrs(Path path, List names) throws IOExc
@Override
public List<String> listXAttrs(Path path) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, FsAction.READ, "listXAttrs")) {
+ return superGroupFileSystem.listXAttrs(path);
+ }
Pointer buf;
int bufsize = 1024;
int r;
@@ -1777,6 +1975,10 @@ public List listXAttrs(Path path) throws IOException {
@Override
public void removeXAttr(Path path, String name) throws IOException {
+ if (needCheckPermission() && !checkPathAccess(path, FsAction.WRITE, "removeXAttr")) {
+ superGroupFileSystem.removeXAttr(path, name);
+ return;
+ }
int r = lib.jfs_removeXattr(Thread.currentThread().getId(), handle, normalizePath(path), name);
if (r == ENOATTR || r == ENODATA) {
throw new IOException("No matching attributes found for remove operation");
@@ -1787,6 +1989,10 @@ public void removeXAttr(Path path, String name) throws IOException {
@Override
public void modifyAclEntries(Path path, List<AclEntry> aclSpec) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "modifyAclEntries")) {
+ superGroupFileSystem.modifyAclEntries(path, aclSpec);
+ return;
+ }
List<AclEntry> existingEntries = getAllAclEntries(path);
List<AclEntry> newAcl = AclTransformation.mergeAclEntries(existingEntries, aclSpec);
setAclInternal(path, newAcl);
@@ -1794,6 +2000,10 @@ public void modifyAclEntries(Path path, List aclSpec) throws IOExcepti
@Override
public void removeAclEntries(Path path, List<AclEntry> aclSpec) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "removeAclEntries")) {
+ superGroupFileSystem.removeAclEntries(path, aclSpec);
+ return;
+ }
List<AclEntry> existingEntries = getAllAclEntries(path);
List<AclEntry> newAcl = AclTransformation.filterAclEntriesByAclSpec(existingEntries, aclSpec);
setAclInternal(path, newAcl);
@@ -1801,6 +2011,10 @@ public void removeAclEntries(Path path, List aclSpec) throws IOExcepti
@Override
public void setAcl(Path path, List<AclEntry> aclSpec) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "setAcl")) {
+ superGroupFileSystem.setAcl(path, aclSpec);
+ return;
+ }
List<AclEntry> existingEntries = getAllAclEntries(path);
List<AclEntry> newAcl = AclTransformation.replaceAclEntries(existingEntries, aclSpec);
setAclInternal(path, newAcl);
@@ -1831,11 +2045,19 @@ private void removeAclInternal(Path path, AclEntryScope scope) throws IOExceptio
@Override
public void removeDefaultAcl(Path path) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "removeDefaultAcl")) {
+ superGroupFileSystem.removeDefaultAcl(path);
+ return;
+ }
removeAclInternal(path, AclEntryScope.DEFAULT);
}
@Override
public void removeAcl(Path path) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "removeAcl")) {
+ superGroupFileSystem.removeAcl(path);
+ return;
+ }
removeAclInternal(path, AclEntryScope.ACCESS);
removeAclInternal(path, AclEntryScope.DEFAULT);
}
@@ -1992,6 +2214,9 @@ private List getAclEntries(Path path) throws IOException {
@Override
public AclStatus getAclStatus(Path path) throws IOException {
+ if (needCheckPermission() && !checkOwner(path, "getAclStatus")) {
+ return superGroupFileSystem.getAclStatus(path);
+ }
FileStatus st = getFileStatus(path);
List<AclEntry> entries = getAclEntries(path);
AclStatus.Builder builder = new AclStatus.Builder().owner(st.getOwner()).group(st.getGroup())
diff --git a/sdk/java/src/main/java/io/juicefs/permission/LockFileChecker.java b/sdk/java/src/main/java/io/juicefs/permission/LockFileChecker.java
new file mode 100644
index 000000000000..055eb73751b7
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/LockFileChecker.java
@@ -0,0 +1,50 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import java.io.File;
+import java.io.IOException;
+
+public class LockFileChecker {
+
+ public static boolean checkAndCreateLockFile(String directoryPath) {
+ File directory = new File(directoryPath);
+
+ if (!directory.exists()) {
+ directory.mkdirs();
+ }
+
+ File lockFile = new File(directory, ".lock");
+
+ if (lockFile.exists()) {
+ return false;
+ } else {
+ try {
+ lockFile.createNewFile();
+ return true;
+ } catch (IOException e) {
+ throw new RuntimeException("ranger policies cache dir cannot created. ", e);
+ }
+ }
+ }
+
+ public static void cleanUp(String directoryPath) {
+ // Delete the lock file (not the directory itself) when the JVM exits
+ File lockFile = new File(directoryPath, ".lock");
+ lockFile.deleteOnExit();
+ }
+
+}
diff --git a/sdk/java/src/main/java/io/juicefs/permission/RangerConfig.java b/sdk/java/src/main/java/io/juicefs/permission/RangerConfig.java
new file mode 100644
index 000000000000..4259047953e9
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/RangerConfig.java
@@ -0,0 +1,70 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+public class RangerConfig {
+
+ public RangerConfig(String rangerRestUrl, String serviceName, String cacheDir, String pollIntervalMs) {
+ this.rangerRestUrl = rangerRestUrl;
+ this.serviceName = serviceName;
+ this.pollIntervalMs = pollIntervalMs;
+ this.cacheDir = cacheDir;
+ }
+
+ private String rangerRestUrl;
+
+ private String serviceName;
+
+ private String pollIntervalMs = "30000";
+
+ private String cacheDir;
+
+
+ public String getRangerRestUrl() {
+ return rangerRestUrl;
+ }
+
+ public void setRangerRestUrl(String rangerRestUrl) {
+ this.rangerRestUrl = rangerRestUrl;
+ }
+
+ public String getServiceName() {
+ return serviceName;
+ }
+
+ public void setServiceName(String serviceName) {
+ this.serviceName = serviceName;
+ }
+
+
+ public String getCacheDir() {
+ return cacheDir;
+ }
+
+ public void setCacheDir(String cacheDir) {
+ this.cacheDir = cacheDir;
+ }
+
+ public String getPollIntervalMs() {
+ return pollIntervalMs;
+ }
+
+ public void setPollIntervalMs(String pollIntervalMs) {
+ this.pollIntervalMs = pollIntervalMs;
+ }
+
+}
diff --git a/sdk/java/src/main/java/io/juicefs/permission/RangerJfsAccessRequest.java b/sdk/java/src/main/java/io/juicefs/permission/RangerJfsAccessRequest.java
new file mode 100644
index 000000000000..f90940d36a1f
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/RangerJfsAccessRequest.java
@@ -0,0 +1,37 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import org.apache.ranger.plugin.policyengine.RangerAccessRequestImpl;
+
+import java.util.Date;
+import java.util.Set;
+
+class RangerJfsAccessRequest extends RangerAccessRequestImpl {
+
+ RangerJfsAccessRequest(String path, String pathOwner, String accessType, String action, String user,
+ Set<String> groups) {
+ setResource(new RangerJfsResource(path, pathOwner));
+ setAccessType(accessType);
+ setUser(user);
+ setUserGroups(groups);
+ setAccessTime(new Date());
+ setAction(action);
+ setForwardedAddresses(null);
+ }
+
+}
diff --git a/sdk/java/src/main/java/io/juicefs/permission/RangerJfsResource.java b/sdk/java/src/main/java/io/juicefs/permission/RangerJfsResource.java
new file mode 100644
index 000000000000..bfbc96049b6a
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/RangerJfsResource.java
@@ -0,0 +1,26 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import org.apache.ranger.plugin.policyengine.RangerAccessResourceImpl;
+
+class RangerJfsResource extends RangerAccessResourceImpl {
+ RangerJfsResource(String path, String owner) {
+ setValue("path", path);
+ setOwnerUser(owner);
+ }
+}
diff --git a/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionChecker.java b/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionChecker.java
new file mode 100644
index 000000000000..b205d7d3f5f2
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionChecker.java
@@ -0,0 +1,280 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import com.google.common.collect.Sets;
+import io.juicefs.JuiceFileSystemImpl;
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.ranger.authorization.hadoop.config.RangerPluginConfig;
+import org.apache.ranger.plugin.policyengine.RangerAccessResult;
+import org.apache.ranger.plugin.policyengine.RangerPolicyEngineOptions;
+import org.apache.ranger.plugin.service.RangerBasePlugin;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.*;
+import java.util.stream.Collectors;
+
+/**
+ * Ranger-based permission checker for JuiceFS paths.
+ *
+ * @author ming.li2
+ **/
+public class RangerPermissionChecker {
+
+ private static final Logger LOG = LoggerFactory.getLogger(RangerPermissionChecker.class);
+
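+ // Maps each POSIX FsAction to the set of Ranger HDFS access types that must
+ // all be granted for the action to be allowed.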
+ private final HashMap<FsAction, Set<String>> fsAction2ActionMapper = new HashMap<FsAction, Set<String>>() {
+ {
+ put(FsAction.NONE, new HashSet<>());
+ put(FsAction.ALL, Sets.newHashSet("read", "write", "execute"));
+ put(FsAction.READ, Sets.newHashSet("read"));
+ put(FsAction.READ_WRITE, Sets.newHashSet("read", "write"));
+ put(FsAction.READ_EXECUTE, Sets.newHashSet("read", "execute"));
+ put(FsAction.WRITE, Sets.newHashSet("write"));
+ put(FsAction.WRITE_EXECUTE, Sets.newHashSet("write", "execute"));
+ put(FsAction.EXECUTE, Sets.newHashSet("execute"));
+ }
+ };
+
+ private final JuiceFileSystemImpl superGroupFileSystem;
+
+ private final String user;
+
+ private final Set<String> groups;
+
+ private final String rangerCacheDir;
+
+ private final RangerBasePlugin rangerPlugin;
+
+ private static final String RANGER_SERVICE_TYPE = "hdfs";
+
+ public RangerPermissionChecker(JuiceFileSystemImpl superGroupFileSystem, RangerConfig config, String user, String group) {
+ this.superGroupFileSystem = superGroupFileSystem;
+ this.user = user;
+ this.groups = Arrays.stream(group.split(",")).collect(Collectors.toSet());
+
+ this.rangerCacheDir = config.getCacheDir();
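+ // Only the client that creates the .lock file in the cache dir runs the policy
+ // refresher; other clients sharing the dir read the cached policies.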
+ boolean startRangerRefresher = LockFileChecker.checkAndCreateLockFile(rangerCacheDir);
+
+ RangerPluginConfig rangerPluginContext = buildRangerPluginContext(RANGER_SERVICE_TYPE, config.getServiceName(), startRangerRefresher);
+ rangerPlugin = new RangerBasePlugin(rangerPluginContext);
+ rangerPlugin.getConfig().set("ranger.plugin.hdfs.policy.cache.dir", this.rangerCacheDir);
+ rangerPlugin.getConfig().set("ranger.plugin.hdfs.service.name", config.getServiceName());
+ rangerPlugin.getConfig().set("ranger.plugin.hdfs.policy.rest.url", config.getRangerRestUrl());
+ rangerPlugin.init();
+ }
+
+ protected RangerPolicyEngineOptions buildRangerPolicyEngineOptions(boolean startRangerRefresher) {
+ if (startRangerRefresher) {
+ return null;
+ }
+ LOG.info("Other JuiceFS Client is refreshing ranger policy, will close the refresher here.");
+ RangerPolicyEngineOptions options = new RangerPolicyEngineOptions();
+ options.disablePolicyRefresher = true;
+ return options;
+ }
+
+ protected RangerPluginConfig buildRangerPluginContext(String serviceType, String serviceName, boolean startRangerRefresher) {
+ return new RangerPluginConfig(serviceType, serviceName, serviceName,
+ null, null, buildRangerPolicyEngineOptions(startRangerRefresher));
+ }
+
+ public boolean checkPermission(Path path, boolean checkOwner, FsAction ancestorAccess, FsAction parentAccess,
+ FsAction access, String operationName) throws IOException {
+ RangerPermissionContext context = new RangerPermissionContext(user, groups, operationName);
+ PathObj obj = path2Obj(path);
+
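+ // A result of true means Ranger could not determine the access and the caller
+ // must fall back to the local permission check; false means Ranger explicitly
+ // allowed it. A DENY from Ranger raises AccessControlException in checkResult().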
+ boolean fallback = true;
+ AuthzStatus authzStatus = AuthzStatus.ALLOW;
+
+ if (access != null && parentAccess != null
+ && parentAccess.implies(FsAction.WRITE) && obj.parent != null && obj.current != null && obj.parent.getPermission().getStickyBit()) {
+ if (!StringUtils.equals(obj.parent.getOwner(), user) && !StringUtils.equals(obj.current.getOwner(), user)) {
+ authzStatus = AuthzStatus.NOT_DETERMINED;
+ }
+ }
+
+ if (authzStatus == AuthzStatus.ALLOW && ancestorAccess != null && obj.ancestor != null) {
+ authzStatus = isAccessAllowed(obj.ancestor, ancestorAccess, context);
+ if (checkResult(authzStatus, user, ancestorAccess.toString(), toPathString(obj.ancestor.getPath()))) {
+ return fallback;
+ }
+ }
+
+ if (authzStatus == AuthzStatus.ALLOW && parentAccess != null && obj.parent != null) {
+ authzStatus = isAccessAllowed(obj.parent, parentAccess, context);
+ if (checkResult(authzStatus, user, parentAccess.toString(), toPathString(obj.parent.getPath()))) {
+ return fallback;
+ }
+ }
+
+ if (authzStatus == AuthzStatus.ALLOW && access != null && obj.current != null) {
+ authzStatus = isAccessAllowed(obj.current, access, context);
+ if (checkResult(authzStatus, user, access.toString(), toPathString(obj.current.getPath()))) {
+ return fallback;
+ }
+ }
+
+ if (checkOwner) {
+ String owner = null;
+ if (obj.current != null) {
+ owner = obj.current.getOwner();
+ }
+ if (!user.equals(owner)) {
+ throw new AccessControlException(
+ assembleExceptionMessage(user, getFirstNonNullAccess(ancestorAccess, parentAccess, access),
+ toPathString(obj.current.getPath())));
+ }
+ }
+ // check access by ranger success
+ return !fallback;
+ }
+
+ public void cleanUp() {
+ try {
+ rangerPlugin.cleanup();
+ } catch (Exception e) {
+ LOG.warn("Error when clean up ranger plugin threads.", e);
+ }
+ LockFileChecker.cleanUp(rangerCacheDir);
+ }
+
+ private static boolean checkResult(AuthzStatus authzStatus, String user, String action, String path) throws AccessControlException {
+ if (authzStatus == AuthzStatus.DENY) {
+ throw new AccessControlException(assembleExceptionMessage(user, action, path));
+ } else {
+ return authzStatus == AuthzStatus.NOT_DETERMINED;
+ }
+ }
+
+ private static String assembleExceptionMessage(String user, String action, String path) {
+ return "Permission denied: user=" + user + ", access=" + action + ", path=\"" + path + "\"";
+ }
+
+ private static String getFirstNonNullAccess(FsAction ancestorAccess, FsAction parentAccess, FsAction access) {
+ if (access != null) {
+ return access.toString();
+ }
+ if (parentAccess != null) {
+ return parentAccess.toString();
+ }
+ if (ancestorAccess != null) {
+ return ancestorAccess.toString();
+ }
+ return FsAction.EXECUTE.toString();
+ }
+
+ private AuthzStatus isAccessAllowed(FileStatus file, FsAction access, RangerPermissionContext context) {
+ String path = toPathString(file.getPath());
+ Set<String> accessTypes = fsAction2ActionMapper.getOrDefault(access, new HashSet<>());
+ String pathOwner = file.getOwner();
+ AuthzStatus authzStatus = null;
+ for (String accessType : accessTypes) {
+ RangerJfsAccessRequest request = new RangerJfsAccessRequest(path, pathOwner, accessType, context.operationName, user, context.userGroups);
+ LOG.debug(request.toString());
+
+ RangerAccessResult result = null;
+ try {
+ result = rangerPlugin.isAccessAllowed(request);
+ if (result != null) {
+ LOG.debug(result.toString());
+ }
+ } catch (Throwable e) {
+ throw new RuntimeException("Check Permission Error. ", e);
+ }
+
+ if (result == null || !result.getIsAccessDetermined()) {
+ authzStatus = AuthzStatus.NOT_DETERMINED;
+ } else if (!result.getIsAllowed()) {
+ authzStatus = AuthzStatus.DENY;
+ break;
+ } else {
+ if (!AuthzStatus.NOT_DETERMINED.equals(authzStatus)) {
+ authzStatus = AuthzStatus.ALLOW;
+ }
+ }
+
+ }
+ if (authzStatus == null) {
+ authzStatus = AuthzStatus.NOT_DETERMINED;
+ }
+ return authzStatus;
+ }
+
+ private enum AuthzStatus {ALLOW, DENY, NOT_DETERMINED}
+
+ private static String toPathString(Path path) {
+ return path.toUri().getPath();
+ }
+
+ private PathObj path2Obj(Path path) throws IOException {
+
+ FileStatus current = getIfExist(path);
+ FileStatus parent = getIfExist(path.getParent());
+ FileStatus ancestor = getAncestor(path);
+
+ return new PathObj(ancestor, parent, current);
+ }
+
+ private FileStatus getIfExist(Path path) throws IOException {
+ try {
+ if (path != null) {
+ return superGroupFileSystem.getFileStatus(path);
+ }
+ } catch (FileNotFoundException ignored) {
+ }
+ return null;
+ }
+
+ public FileStatus getAncestor(Path path) throws IOException {
+ // Walk up the path to find the nearest existing ancestor
+ path = path.getParent();
+ FileStatus tmp = null;
+ while (path != null && tmp == null) {
+ tmp = getIfExist(path);
+ path = path.getParent();
+ }
+ return tmp;
+ }
+
+ public static class PathObj {
+
+ FileStatus ancestor = null;
+
+ FileStatus parent = null;
+
+ FileStatus current = null;
+
+ public PathObj(FileStatus ancestor, FileStatus parent, FileStatus current) {
+ this.ancestor = ancestor;
+ this.parent = parent;
+ this.current = current;
+ }
+ }
+
+}
diff --git a/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionContext.java b/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionContext.java
new file mode 100644
index 000000000000..122deff6e88d
--- /dev/null
+++ b/sdk/java/src/main/java/io/juicefs/permission/RangerPermissionContext.java
@@ -0,0 +1,35 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import java.util.Set;
+
+public class RangerPermissionContext {
+
+ public final String user;
+
+ public final Set<String> userGroups;
+
+ public final String operationName;
+
+ public RangerPermissionContext(String user, Set<String> groups, String operationName) {
+ this.user = user;
+ this.userGroups = groups;
+ this.operationName = operationName;
+ }
+
+}
diff --git a/sdk/java/src/main/resources/ranger-hdfs-audit.xml b/sdk/java/src/main/resources/ranger-hdfs-audit.xml
new file mode 100644
index 000000000000..24856dcb7f76
--- /dev/null
+++ b/sdk/java/src/main/resources/ranger-hdfs-audit.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+  <property>
+    <name>xasecure.audit.is.enabled</name>
+    <value>false</value>
+  </property>
+</configuration>
\ No newline at end of file
diff --git a/sdk/java/src/main/resources/ranger-hdfs-security.xml b/sdk/java/src/main/resources/ranger-hdfs-security.xml
new file mode 100644
index 000000000000..2ef9a06309d9
--- /dev/null
+++ b/sdk/java/src/main/resources/ranger-hdfs-security.xml
@@ -0,0 +1,83 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+  <property>
+    <name>ranger.plugin.hdfs.service.name</name>
+    <value>xxx</value>
+    <description>
+      Name of the Ranger service containing policies for this YARN instance
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.source.impl</name>
+    <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
+    <description>
+      Class to retrieve policies from the source
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.rest.url</name>
+    <value>xxx</value>
+    <description>
+      URL to Ranger Admin
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.pollIntervalMs</name>
+    <value>30000</value>
+    <description>
+      How often to poll for changes in policies?
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.cache.dir</name>
+    <value>xxx</value>
+    <description>
+      Directory where Ranger policies are cached after successful retrieval from the source
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.rest.client.connection.timeoutMs</name>
+    <value>120000</value>
+    <description>
+      Hdfs Plugin RangerRestClient Connection Timeout in Milli Seconds
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.rest.client.read.timeoutMs</name>
+    <value>30000</value>
+    <description>
+      Hdfs Plugin RangerRestClient read Timeout in Milli Seconds
+    </description>
+  </property>
+
+  <property>
+    <name>xasecure.add-hadoop-authorization</name>
+    <value>true</value>
+    <description>
+      Enable/Disable the default hadoop authorization (based on
+      rwxrwxrwx permission on the resource) if Ranger Authorization fails.
+    </description>
+  </property>
+</configuration>
\ No newline at end of file
diff --git a/sdk/java/src/test/java/io/juicefs/permission/RangerAdminClientImpl.java b/sdk/java/src/test/java/io/juicefs/permission/RangerAdminClientImpl.java
new file mode 100644
index 000000000000..a3c7ee984ef8
--- /dev/null
+++ b/sdk/java/src/test/java/io/juicefs/permission/RangerAdminClientImpl.java
@@ -0,0 +1,69 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package io.juicefs.permission;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.ranger.admin.client.AbstractRangerAdminClient;
+import org.apache.ranger.plugin.util.ServicePolicies;
+import org.apache.ranger.plugin.util.ServiceTags;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.util.List;
+
+public class RangerAdminClientImpl extends AbstractRangerAdminClient {
+
+ private static final Logger LOG = LoggerFactory.getLogger(RangerAdminClientImpl.class);
+
+ private final static String cacheFilename = "hdfs-policies.json";
+ private final static String tagFilename = "hdfs-policies-tag.json";
+ public void init(String serviceName, String appId, String configPropertyPrefix, Configuration config) {
+ super.init(serviceName, appId, configPropertyPrefix, config);
+ }
+
+ public ServicePolicies getServicePoliciesIfUpdated(long lastKnownVersion, long lastActivationTimeInMillis) throws Exception {
+
+ String basedir = System.getProperty("basedir");
+ if (basedir == null) {
+ basedir = new File(".").getCanonicalPath();
+ }
+ final String relativePath = "/src/test/resources/";
+ java.nio.file.Path cachePath = FileSystems.getDefault().getPath(basedir, relativePath + cacheFilename);
+ byte[] cacheBytes = Files.readAllBytes(cachePath);
+ return gson.fromJson(new String(cacheBytes), ServicePolicies.class);
+ }
+
+ public ServiceTags getServiceTagsIfUpdated(long lastKnownVersion, long lastActivationTimeInMillis) throws Exception {
+ String basedir = System.getProperty("basedir");
+ if (basedir == null) {
+ basedir = new File(".").getCanonicalPath();
+ }
+ final String relativePath = "/src/test/resources/";
+ java.nio.file.Path cachePath = FileSystems.getDefault().getPath(basedir, relativePath + tagFilename);
+ byte[] cacheBytes = Files.readAllBytes(cachePath);
+ return gson.fromJson(new String(cacheBytes), ServiceTags.class);
+ }
+
+ public List<String> getTagTypes(String tagTypePattern) throws Exception {
+ return null;
+ }
+
+
+}
diff --git a/sdk/java/src/test/java/io/juicefs/permission/RangerPermissionCheckerTest.java b/sdk/java/src/test/java/io/juicefs/permission/RangerPermissionCheckerTest.java
new file mode 100644
index 000000000000..4dac849ea8e2
--- /dev/null
+++ b/sdk/java/src/test/java/io/juicefs/permission/RangerPermissionCheckerTest.java
@@ -0,0 +1,496 @@
+/*
+ * JuiceFS, Copyright 2024 Juicedata, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+package io.juicefs.permission;
+
+import io.juicefs.JuiceFileSystemTest;
+import junit.framework.TestCase;
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.*;
+import org.apache.hadoop.fs.permission.FsAction;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Assert;
+
+import java.io.ByteArrayOutputStream;
+import java.security.PrivilegedExceptionAction;
+
+public class RangerPermissionCheckerTest extends TestCase {
+
+ private FileSystem fs;
+ private Configuration cfg;
+
+ public void setUp() throws Exception {
+ cfg = new Configuration();
+ cfg.addResource(JuiceFileSystemTest.class.getClassLoader().getResourceAsStream("core-site.xml"));
+ cfg.set("juicefs.ranger-rest-url", "http://localhost");
+ cfg.set("juicefs.ranger-service-name", "cl1_hadoop");
+ // set superuser
+ cfg.set("juicefs.superuser", UserGroupInformation.getCurrentUser().getShortUserName());
+ fs = FileSystem.newInstance(cfg);
+ cfg.setQuietMode(false);
+ }
+
+ public void tearDown() throws Exception {
+ fs.close();
+ }
+
+ public void testRangerCheckerInitFailed() throws Exception {
+ Configuration cfg1 = new Configuration();
+ cfg1.addResource(JuiceFileSystemTest.class.getClassLoader().getResourceAsStream("core-site.xml"));
+ cfg1.set("juicefs.superuser", UserGroupInformation.getCurrentUser().getShortUserName());
+ cfg1.setQuietMode(false);
+
+ FileSystem fs1 = FileSystem.newInstance(cfg1);
+
+ final Path file = new Path("/tmp/tmpdir/data-file2");
+ FSDataOutputStream out = fs1.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ fs1.setPermission(file, new FsPermission(FsAction.READ_WRITE, FsAction.READ, FsAction.NONE));
+
+ // Now try to read the file as unknown user "bob" - ranger should allow this user, but now should not be allowed
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg1);
+ try {
+ fs.open(file);
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+
+ fs.close();
+ return null;
+ }
+ });
+
+ fs1.delete(file);
+ fs1.close();
+ }
+
+ public void testRead() throws Exception {
+ HDFSReadTest("/tmp/tmpdir/data-file2");
+ }
+
+ public void testWrite() throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path("/tmp/tmpdir2/data-file3");
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ fs.setPermission(file, new FsPermission(FsAction.READ_WRITE, FsAction.READ_WRITE, FsAction.NONE));
+
+ // Now try to write to the file as "bob" - this should be allowed (by the policy - user)
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Write to the file
+ fs.append(file);
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to write to the file as "alice" - this should be allowed (by the policy - group)
+ ugi = UserGroupInformation.createUserForTesting("alice", new String[]{"IT"});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Write to the file
+ fs.append(file);
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to write to the file as unknown user "eve" - this should not be allowed
+ ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Write to the file
+ try {
+ fs.append(file);
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ // expected
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ }
+
+ public void testExecute() throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path("/tmp/tmpdir3/data-file2");
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ fs.setPermission(file, new FsPermission(FsAction.READ_WRITE, FsAction.READ, FsAction.NONE));
+
+ Path parentDir = new Path("/tmp/tmpdir3");
+
+ fs.setPermission(parentDir, new FsPermission(FsAction.ALL, FsAction.READ_EXECUTE, FsAction.NONE));
+
+
+ // Try to read the directory as "bob" - this should be allowed (by the policy - user)
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ RemoteIterator<LocatedFileStatus> iter = fs.listFiles(file.getParent(), false);
+ Assert.assertTrue(iter.hasNext());
+
+ fs.close();
+ return null;
+ }
+ });
+ // Try to read the directory as "alice" - this should be allowed (by the policy - group)
+ ugi = UserGroupInformation.createUserForTesting("alice", new String[]{"IT"});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ RemoteIterator<LocatedFileStatus> iter = fs.listFiles(file.getParent(), false);
+ Assert.assertTrue(iter.hasNext());
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the directory as unknown user "eve" - this should not be allowed
+ ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ try {
+ RemoteIterator<LocatedFileStatus> iter = fs.listFiles(file.getParent(), false);
+ Assert.assertTrue(iter.hasNext());
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ fs.delete(parentDir);
+ }
+
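+ // No Ranger policy covers /tmp/tmpdir123, so setPermission falls back to the plain
+ // permission check, which rejects the non-owner "eve"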
+ public void testSetPermission() throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path("/tmp/tmpdir123/data-file3");
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ // Now try to change the file permission as unknown user "eve" - no Ranger policy matches, so the check falls back to the original permission mask and should fail
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Try to change the file permission
+ try {
+ fs.setPermission(file, new FsPermission(FsAction.READ, FsAction.NONE, FsAction.NONE));
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ // expected
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ }
+
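+ // Same fallback as testSetPermission: with no matching Ranger policy, setOwner is
+ // checked against the original permission model and denied for "eve"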
+ public void testSetOwner() throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path("/tmp/tmpdir123/data-file3");
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ // Now try to change the file owner as unknown user "eve" - no Ranger policy matches, so the check falls back to the original permission mask and should fail
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Try to change the file owner
+ try {
+ fs.setOwner(file, "eve", "eve");
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ // expected
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ }
+
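+ // Tag-based check: hdfs-policies-tag.json tags /tmp/tmpdir6 with "TmpdirTag", and the
+ // "TmpdirTagPolicy" grants hdfs:read to user "bob" and group "IT" only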
+ public void testReadTestUsingTagPolicy() throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path("/tmp/tmpdir6/data-file2");
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ fs.setPermission(file, new FsPermission(FsAction.READ_WRITE, FsAction.READ, FsAction.NONE));
+
+ // Now try to read the file as "bob" - this should be allowed (by the policy - user)
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Read the file
+ FSDataInputStream in = fs.open(file);
+ ByteArrayOutputStream output = new ByteArrayOutputStream();
+ IOUtils.copy(in, output);
+ String content = new String(output.toByteArray());
+ Assert.assertTrue(content.startsWith("data0"));
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the file as "alice" - this should be allowed (by the policy - group)
+ ugi = UserGroupInformation.createUserForTesting("alice", new String[]{"IT"});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Read the file
+ FSDataInputStream in = fs.open(file);
+ ByteArrayOutputStream output = new ByteArrayOutputStream();
+ IOUtils.copy(in, output);
+ String content = new String(output.toByteArray());
+ Assert.assertTrue(content.startsWith("data0"));
+
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the file as unknown user "eve" - this should not be allowed
+ ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Read the file
+ try {
+ fs.open(file);
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ // expected
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the file as known user "dave" - this should not be allowed, as he doesn't have the correct permissions
+ ugi = UserGroupInformation.createUserForTesting("dave", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+
+ // Read the file
+ try {
+ fs.open(file);
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ // expected
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ }
+
+ public void testHDFSContentSummary() throws Exception {
+ HDFSGetContentSummary("/tmp/get-content-summary");
+ }
+
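+ // Shared read-test helper: the superuser writes the file, then "bob" (user policy) and
+ // "alice" (group "IT" policy) must be able to read it while "eve" must be denied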
+ void HDFSReadTest(String fileName) throws Exception {
+
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ final Path file = new Path(fileName);
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+
+ fs.setPermission(file, new FsPermission(FsAction.READ_WRITE, FsAction.READ, FsAction.NONE));
+
+ // Now try to read the file as "bob" - this should be allowed (by the policy - user)
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ // Read the file
+ FSDataInputStream in = fs.open(file);
+ ByteArrayOutputStream output = new ByteArrayOutputStream();
+ IOUtils.copy(in, output);
+ String content = new String(output.toByteArray());
+ Assert.assertTrue(content.startsWith("data0"));
+
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the file as "alice" - this should be allowed (by the policy - group)
+ ugi = UserGroupInformation.createUserForTesting("alice", new String[]{"IT"});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ FSDataInputStream in = fs.open(file);
+ ByteArrayOutputStream output = new ByteArrayOutputStream();
+ IOUtils.copy(in, output);
+ String content = new String(output.toByteArray());
+ Assert.assertTrue(content.startsWith("data0"));
+ fs.close();
+ return null;
+ }
+ });
+
+ // Now try to read the file as unknown user "eve" - this should not be allowed
+ ugi = UserGroupInformation.createUserForTesting("eve", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ try {
+ fs.open(file);
+ Assert.fail("Failure expected on an incorrect permission");
+ } catch (AccessControlException ex) {
+ Assert.assertTrue(AccessControlException.class.getName().equals(ex.getClass().getName()));
+ }
+
+ fs.close();
+ return null;
+ }
+ });
+
+ fs.delete(file);
+ }
+
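+ // getContentSummary needs read/execute on the whole subtree; the "/tmp/get-content-summary"
+ // policy grants both to user "bob" and group "IT" on the directory and its two subdirectories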
+ void HDFSGetContentSummary(final String dirName) throws Exception {
+
+ String subdirName = dirName + "/tmpdir";
+
+ createFile(subdirName, 1);
+ createFile(subdirName, 2);
+
+ fs.setPermission(new Path(dirName), new FsPermission(FsAction.READ_WRITE, FsAction.READ, FsAction.NONE));
+
+ UserGroupInformation ugi = UserGroupInformation.createUserForTesting("bob", new String[]{});
+ ugi.doAs(new PrivilegedExceptionAction<Void>() {
+
+ public Void run() throws Exception {
+ FileSystem fs = FileSystem.get(cfg);
+ try {
+ // GetContentSummary on the directory dirName
+ ContentSummary contentSummary = fs.getContentSummary(new Path(dirName));
+
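+ // Expect 3 directories: dirName itself plus the tmpdir1 and tmpdir2 subdirectories created above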
+ long directoryCount = contentSummary.getDirectoryCount();
+ Assert.assertTrue("Found unexpected number of directories; expected-count=3, actual-count=" + directoryCount, directoryCount == 3);
+ } catch (Exception e) {
+ Assert.fail("Failed to getContentSummary, exception=" + e);
+ }
+ fs.close();
+ return null;
+ }
+ });
+
+ deleteFile(subdirName, 1);
+ deleteFile(subdirName, 2);
+ }
+
+ void createFile(String baseDir, Integer index) throws Exception {
+ // Write a file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ String dirName = baseDir + (index != null ? String.valueOf(index) : "");
+ String fileName = dirName + "/dummy-data";
+ final Path file = new Path(fileName);
+ FSDataOutputStream out = fs.create(file);
+ for (int i = 0; i < 1024; ++i) {
+ out.write(("data" + i + "\n").getBytes("UTF-8"));
+ out.flush();
+ }
+ out.close();
+ }
+
+ void deleteFile(String baseDir, Integer index) throws Exception {
+ // Delete the file - the AccessControlEnforcer won't be invoked as we are the "superuser"
+ String dirName = baseDir + (index != null ? String.valueOf(index) : "");
+ String fileName = dirName + "/dummy-data";
+ final Path file = new Path(fileName);
+ fs.delete(file);
+ }
+}
diff --git a/sdk/java/src/test/resources/hdfs-policies-tag.json b/sdk/java/src/test/resources/hdfs-policies-tag.json
new file mode 100644
index 000000000000..313d9e78579e
--- /dev/null
+++ b/sdk/java/src/test/resources/hdfs-policies-tag.json
@@ -0,0 +1,37 @@
+{
+ "op": "add_or_update",
+ "serviceName": "cl1_hadoop",
+ "tagVersion": 2,
+ "tagDefinitions": {},
+ "tags": {
+ "2": {
+ "type": "TmpdirTag",
+ "owner": 0,
+ "attributes": {},
+ "id": 2,
+ "isEnabled": true,
+ "version": 1
+ }
+ },
+ "serviceResources": [
+ {
+ "resourceElements": {
+ "path": {
+ "values": [
+ "/tmp/tmpdir6"
+ ],
+ "isExcludes": false,
+ "isRecursive": true
+ }
+ },
+ "id": 2,
+ "isEnabled": true,
+ "version": 2
+ }
+ ],
+ "resourceToTagIds": {
+ "2": [
+ 2
+ ]
+ }
+}
\ No newline at end of file
diff --git a/sdk/java/src/test/resources/hdfs-policies.json b/sdk/java/src/test/resources/hdfs-policies.json
new file mode 100644
index 000000000000..15fa157e7805
--- /dev/null
+++ b/sdk/java/src/test/resources/hdfs-policies.json
@@ -0,0 +1,1252 @@
+{
+ "serviceName": "cl1_hadoop",
+ "serviceId": 6,
+ "policyVersion": 7,
+ "policyUpdateTime": "20170220-12:36:01.000-+0000",
+ "policies": [
+ {
+ "service": "cl1_hadoop",
+ "name": "/tmp/tmpdir",
+ "policyType": 0,
+ "policyPriority": 0,
+ "description": "",
+ "isAuditEnabled": false,
+ "resources": {
+ "path": {
+ "values": [
+ "/tmp/tmpdir/"
+ ],
+ "isExcludes": false,
+ "isRecursive": true
+ }
+ },
+ "policyItems": [
+ {
+ "accesses": [
+ {
+ "type": "read",
+ "isAllowed": true
+ }
+ ],
+ "users": [],
+ "groups": [
+ "IT"
+ ],
+ "roles": [],
+ "conditions": [],
+ "delegateAdmin": false
+ },
+ {
+ "accesses": [
+ {
+ "type": "read",
+ "isAllowed": true
+ }
+ ],
+ "users": [
+ "bob"
+ ],
+ "groups": [],
+ "roles": [],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "serviceType": "hdfs",
+ "id": 14,
+ "isEnabled": true,
+ "version": 4
+ },
+ {
+ "service": "cl1_hadoop",
+ "name": "/tmp/tmpdir2",
+ "policyType": 0,
+ "description": "",
+ "isAuditEnabled": true,
+ "resources": {
+ "path": {
+ "values": [
+ "/tmp/tmpdir2"
+ ],
+ "isExcludes": false,
+ "isRecursive": true
+ }
+ },
+ "policyItems": [
+ {
+ "accesses": [
+ {
+ "type": "write",
+ "isAllowed": true
+ }
+ ],
+ "users": [],
+ "groups": [
+ "IT"
+ ],
+ "conditions": [],
+ "delegateAdmin": false
+ },
+ {
+ "accesses": [
+ {
+ "type": "write",
+ "isAllowed": true
+ }
+ ],
+ "users": [
+ "bob"
+ ],
+ "groups": [],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 15,
+ "isEnabled": true,
+ "version": 1
+ },
+ {
+ "service": "cl1_hadoop",
+ "name": "/tmp/tmpdir3",
+ "policyType": 0,
+ "description": "",
+ "isAuditEnabled": true,
+ "resources": {
+ "path": {
+ "values": [
+ "/tmp/tmpdir3"
+ ],
+ "isExcludes": false,
+ "isRecursive": true
+ }
+ },
+ "policyItems": [
+ {
+ "accesses": [
+ {
+ "type": "read",
+ "isAllowed": true
+ },
+ {
+ "type": "execute",
+ "isAllowed": true
+ }
+ ],
+ "users": [],
+ "groups": [
+ "IT"
+ ],
+ "conditions": [],
+ "delegateAdmin": false
+ },
+ {
+ "accesses": [
+ {
+ "type": "read",
+ "isAllowed": true
+ },
+ {
+ "type": "execute",
+ "isAllowed": true
+ }
+ ],
+ "users": [
+ "bob"
+ ],
+ "groups": [],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 16,
+ "isEnabled": true,
+ "version": 1
+ },
+ {
+ "service": "cl1_hadoop",
+ "name": "/tmp/get-content-summary",
+ "policyType": 0,
+ "description": "",
+ "isAuditEnabled": true,
+ "resources": {
+ "path": {"values": ["/tmp/get-content-summary", "/tmp/get-content-summary/tmpdir1", "/tmp/get-content-summary/tmpdir2"], "isExcludes": false, "isRecursive": false}
+ },
+ "policyItems": [
+ {
+ "accesses": [{"type": "read","isAllowed": true}, {"type": "execute","isAllowed": true}],
+ "users": ["bob"],
+ "groups": ["IT"],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 40,
+ "isEnabled": true,
+ "version": 1
+ }
+ ],
+ "serviceDef": {
+ "name": "hdfs",
+ "implClass": "org.apache.ranger.services.hdfs.RangerServiceHdfs",
+ "label": "HDFS Repository",
+ "description": "HDFS Repository",
+ "options": {},
+ "configs": [
+ {
+ "itemId": 1,
+ "name": "username",
+ "type": "string",
+ "subType": "",
+ "mandatory": true,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Username"
+ },
+ {
+ "itemId": 2,
+ "name": "password",
+ "type": "password",
+ "subType": "",
+ "mandatory": true,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Password"
+ },
+ {
+ "itemId": 3,
+ "name": "fs.default.name",
+ "type": "string",
+ "subType": "",
+ "mandatory": true,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Namenode URL"
+ },
+ {
+ "itemId": 4,
+ "name": "hadoop.security.authorization",
+ "type": "bool",
+ "subType": "YesTrue:NoFalse",
+ "mandatory": true,
+ "defaultValue": "false",
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Authorization Enabled"
+ },
+ {
+ "itemId": 5,
+ "name": "hadoop.security.authentication",
+ "type": "enum",
+ "subType": "authnType",
+ "mandatory": true,
+ "defaultValue": "simple",
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Authentication Type"
+ },
+ {
+ "itemId": 6,
+ "name": "hadoop.security.auth_to_local",
+ "type": "string",
+ "subType": "",
+ "mandatory": false,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": ""
+ },
+ {
+ "itemId": 7,
+ "name": "dfs.datanode.kerberos.principal",
+ "type": "string",
+ "subType": "",
+ "mandatory": false,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": ""
+ },
+ {
+ "itemId": 8,
+ "name": "dfs.namenode.kerberos.principal",
+ "type": "string",
+ "subType": "",
+ "mandatory": false,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": ""
+ },
+ {
+ "itemId": 9,
+ "name": "dfs.secondary.namenode.kerberos.principal",
+ "type": "string",
+ "subType": "",
+ "mandatory": false,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": ""
+ },
+ {
+ "itemId": 10,
+ "name": "hadoop.rpc.protection",
+ "type": "enum",
+ "subType": "rpcProtection",
+ "mandatory": false,
+ "defaultValue": "authentication",
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "RPC Protection Type"
+ },
+ {
+ "itemId": 11,
+ "name": "commonNameForCertificate",
+ "type": "string",
+ "subType": "",
+ "mandatory": false,
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Common Name for Certificate"
+ }
+ ],
+ "resources": [
+ {
+ "itemId": 1,
+ "name": "path",
+ "type": "path",
+ "level": 10,
+ "mandatory": true,
+ "lookupSupported": true,
+ "recursiveSupported": true,
+ "excludesSupported": false,
+ "matcher": "org.apache.ranger.plugin.resourcematcher.RangerPathResourceMatcher",
+ "matcherOptions": {
+ "wildCard": "true",
+ "ignoreCase": "false"
+ },
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "",
+ "label": "Resource Path",
+ "description": "HDFS file or directory path"
+ }
+ ],
+ "accessTypes": [
+ {
+ "itemId": 1,
+ "name": "read",
+ "label": "Read",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 2,
+ "name": "write",
+ "label": "Write",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3,
+ "name": "execute",
+ "label": "Execute",
+ "impliedGrants": []
+ }
+ ],
+ "policyConditions": [],
+ "contextEnrichers": [],
+ "enums": [
+ {
+ "itemId": 1,
+ "name": "authnType",
+ "elements": [
+ {
+ "itemId": 1,
+ "name": "simple",
+ "label": "Simple"
+ },
+ {
+ "itemId": 2,
+ "name": "kerberos",
+ "label": "Kerberos"
+ }
+ ],
+ "defaultIndex": 0
+ },
+ {
+ "itemId": 2,
+ "name": "rpcProtection",
+ "elements": [
+ {
+ "itemId": 1,
+ "name": "authentication",
+ "label": "Authentication"
+ },
+ {
+ "itemId": 2,
+ "name": "integrity",
+ "label": "Integrity"
+ },
+ {
+ "itemId": 3,
+ "name": "privacy",
+ "label": "Privacy"
+ }
+ ],
+ "defaultIndex": 0
+ }
+ ],
+ "dataMaskDef": {
+ "maskTypes": [],
+ "accessTypes": [],
+ "resources": []
+ },
+ "rowFilterDef": {
+ "accessTypes": [],
+ "resources": []
+ },
+ "id": 1,
+ "guid": "0d047247-bafe-4cf8-8e9b-d5d377284b2d",
+ "isEnabled": true,
+ "createTime": "20170217-11:41:31.000-+0000",
+ "updateTime": "20170217-11:41:31.000-+0000",
+ "version": 1
+ },
+ "auditMode": "audit-default",
+ "tagPolicies": {
+ "serviceName": "KafkaTagService",
+ "serviceId": 5,
+ "policyVersion": 5,
+ "policyUpdateTime": "20170220-12:35:51.000-+0000",
+ "policies": [
+ {
+ "service": "KafkaTagService",
+ "name": "EXPIRES_ON",
+ "policyType": 0,
+ "description": "Policy for data with EXPIRES_ON tag",
+ "isAuditEnabled": true,
+ "resources": {
+ "tag": {
+ "values": [
+ "EXPIRES_ON"
+ ],
+ "isExcludes": false,
+ "isRecursive": false
+ }
+ },
+ "policyItems": [],
+ "denyPolicyItems": [
+ {
+ "accesses": [
+ {
+ "type": "hdfs:read",
+ "isAllowed": true
+ },
+ {
+ "type": "hdfs:write",
+ "isAllowed": true
+ },
+ {
+ "type": "hdfs:execute",
+ "isAllowed": true
+ },
+ {
+ "type": "hbase:read",
+ "isAllowed": true
+ },
+ {
+ "type": "hbase:write",
+ "isAllowed": true
+ },
+ {
+ "type": "hbase:create",
+ "isAllowed": true
+ },
+ {
+ "type": "hbase:admin",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:select",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:update",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:create",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:drop",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:alter",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:index",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:lock",
+ "isAllowed": true
+ },
+ {
+ "type": "hive:all",
+ "isAllowed": true
+ },
+ {
+ "type": "yarn:submit-app",
+ "isAllowed": true
+ },
+ {
+ "type": "yarn:admin-queue",
+ "isAllowed": true
+ },
+ {
+ "type": "knox:allow",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:submitTopology",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:fileUpload",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:fileDownload",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:killTopology",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:rebalance",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:activate",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:deactivate",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:getTopologyConf",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:getTopology",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:getUserTopology",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:getTopologyInfo",
+ "isAllowed": true
+ },
+ {
+ "type": "storm:uploadNewCredentials",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:create",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:delete",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:rollover",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:setkeymaterial",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:get",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:getkeys",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:getmetadata",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:generateeek",
+ "isAllowed": true
+ },
+ {
+ "type": "kms:decrypteek",
+ "isAllowed": true
+ },
+ {
+ "type": "solr:query",
+ "isAllowed": true
+ },
+ {
+ "type": "solr:update",
+ "isAllowed": true
+ },
+ {
+ "type": "solr:others",
+ "isAllowed": true
+ },
+ {
+ "type": "solr:solr_admin",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:publish",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:consume",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:configure",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:describe",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:create",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:delete",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:kafka_admin",
+ "isAllowed": true
+ },
+ {
+ "type": "atlas:read",
+ "isAllowed": true
+ },
+ {
+ "type": "atlas:create",
+ "isAllowed": true
+ },
+ {
+ "type": "atlas:update",
+ "isAllowed": true
+ },
+ {
+ "type": "atlas:delete",
+ "isAllowed": true
+ },
+ {
+ "type": "atlas:all",
+ "isAllowed": true
+ }
+ ],
+ "users": [],
+ "groups": [
+ "public"
+ ],
+ "conditions": [
+ {
+ "type": "accessed-after-expiry",
+ "values": [
+ "yes"
+ ]
+ }
+ ],
+ "delegateAdmin": false
+ }
+ ],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 10,
+ "isEnabled": true,
+ "version": 1
+ },
+ {
+ "service": "KafkaTagService",
+ "name": "AtlasKafkaTagPolicy",
+ "policyType": 0,
+ "description": "",
+ "isAuditEnabled": true,
+ "resources": {
+ "tag": {
+ "values": [
+ "KafkaTag"
+ ],
+ "isExcludes": false,
+ "isRecursive": false
+ }
+ },
+ "policyItems": [
+ {
+ "accesses": [
+ {
+ "type": "kafka:consume",
+ "isAllowed": true
+ },
+ {
+ "type": "kafka:describe",
+ "isAllowed": true
+ }
+ ],
+ "users": [
+ "CN\u003dClient,O\u003dApache,L\u003dDublin,ST\u003dLeinster,C\u003dIE"
+ ],
+ "groups": [],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 11,
+ "isEnabled": true,
+ "version": 2
+ },
+ {
+ "service": "KafkaTagService",
+ "name": "TmpdirTagPolicy",
+ "policyType": 0,
+ "description": "",
+ "isAuditEnabled": true,
+ "resources": {
+ "tag": {
+ "values": [
+ "TmpdirTag"
+ ],
+ "isExcludes": false,
+ "isRecursive": false
+ }
+ },
+ "policyItems": [
+ {
+ "accesses": [
+ {
+ "type": "hdfs:read",
+ "isAllowed": true
+ }
+ ],
+ "users": [],
+ "groups": [
+ "IT"
+ ],
+ "conditions": [],
+ "delegateAdmin": false
+ },
+ {
+ "accesses": [
+ {
+ "type": "hdfs:read",
+ "isAllowed": true
+ }
+ ],
+ "users": [
+ "bob"
+ ],
+ "groups": [],
+ "conditions": [],
+ "delegateAdmin": false
+ }
+ ],
+ "denyPolicyItems": [],
+ "allowExceptions": [],
+ "denyExceptions": [],
+ "dataMaskPolicyItems": [],
+ "rowFilterPolicyItems": [],
+ "id": 17,
+ "isEnabled": true,
+ "version": 1
+ }
+ ],
+ "serviceDef": {
+ "name": "tag",
+ "implClass": "org.apache.ranger.services.tag.RangerServiceTag",
+ "label": "TAG",
+ "description": "TAG Service Definition",
+ "options": {
+ "ui.pages": "tag-based-policies"
+ },
+ "configs": [],
+ "resources": [
+ {
+ "itemId": 1,
+ "name": "tag",
+ "type": "string",
+ "level": 1,
+ "mandatory": true,
+ "lookupSupported": true,
+ "recursiveSupported": false,
+ "excludesSupported": false,
+ "matcher": "org.apache.ranger.plugin.resourcematcher.RangerDefaultResourceMatcher",
+ "matcherOptions": {
+ "wildCard": "false",
+ "ignoreCase": "false"
+ },
+ "validationRegEx": "",
+ "validationMessage": "",
+ "uiHint": "{ \"singleValue\":true }",
+ "label": "TAG",
+ "description": "TAG"
+ }
+ ],
+ "accessTypes": [
+ {
+ "itemId": 1002,
+ "name": "hdfs:read",
+ "label": "Read",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 1003,
+ "name": "hdfs:write",
+ "label": "Write",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 1004,
+ "name": "hdfs:execute",
+ "label": "Execute",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 2003,
+ "name": "hbase:read",
+ "label": "Read",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 2004,
+ "name": "hbase:write",
+ "label": "Write",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 2005,
+ "name": "hbase:create",
+ "label": "Create",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 2006,
+ "name": "hbase:admin",
+ "label": "Admin",
+ "impliedGrants": [
+ "hbase:read",
+ "hbase:write",
+ "hbase:create"
+ ]
+ },
+ {
+ "itemId": 3004,
+ "name": "hive:select",
+ "label": "select",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3005,
+ "name": "hive:update",
+ "label": "update",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3006,
+ "name": "hive:create",
+ "label": "Create",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3007,
+ "name": "hive:drop",
+ "label": "Drop",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3008,
+ "name": "hive:alter",
+ "label": "Alter",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3009,
+ "name": "hive:index",
+ "label": "Index",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3010,
+ "name": "hive:lock",
+ "label": "Lock",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 3011,
+ "name": "hive:all",
+ "label": "All",
+ "impliedGrants": [
+ "hive:select",
+ "hive:update",
+ "hive:create",
+ "hive:drop",
+ "hive:alter",
+ "hive:index",
+ "hive:lock"
+ ]
+ },
+ {
+ "itemId": 4005,
+ "name": "yarn:submit-app",
+ "label": "submit-app",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 4006,
+ "name": "yarn:admin-queue",
+ "label": "admin-queue",
+ "impliedGrants": [
+ "yarn:submit-app"
+ ]
+ },
+ {
+ "itemId": 5006,
+ "name": "knox:allow",
+ "label": "Allow",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6007,
+ "name": "storm:submitTopology",
+ "label": "Submit Topology",
+ "impliedGrants": [
+ "storm:fileUpload",
+ "storm:fileDownload"
+ ]
+ },
+ {
+ "itemId": 6008,
+ "name": "storm:fileUpload",
+ "label": "File Upload",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6011,
+ "name": "storm:fileDownload",
+ "label": "File Download",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6012,
+ "name": "storm:killTopology",
+ "label": "Kill Topology",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6013,
+ "name": "storm:rebalance",
+ "label": "Rebalance",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6014,
+ "name": "storm:activate",
+ "label": "Activate",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6015,
+ "name": "storm:deactivate",
+ "label": "Deactivate",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6016,
+ "name": "storm:getTopologyConf",
+ "label": "Get Topology Conf",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6017,
+ "name": "storm:getTopology",
+ "label": "Get Topology",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6018,
+ "name": "storm:getUserTopology",
+ "label": "Get User Topology",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6019,
+ "name": "storm:getTopologyInfo",
+ "label": "Get Topology Info",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 6020,
+ "name": "storm:uploadNewCredentials",
+ "label": "Upload New Credential",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7008,
+ "name": "kms:create",
+ "label": "Create",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7009,
+ "name": "kms:delete",
+ "label": "Delete",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7010,
+ "name": "kms:rollover",
+ "label": "Rollover",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7011,
+ "name": "kms:setkeymaterial",
+ "label": "Set Key Material",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7012,
+ "name": "kms:get",
+ "label": "Get",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7013,
+ "name": "kms:getkeys",
+ "label": "Get Keys",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7014,
+ "name": "kms:getmetadata",
+ "label": "Get Metadata",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7015,
+ "name": "kms:generateeek",
+ "label": "Generate EEK",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 7016,
+ "name": "kms:decrypteek",
+ "label": "Decrypt EEK",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 8108,
+ "name": "solr:query",
+ "label": "Query",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 8208,
+ "name": "solr:update",
+ "label": "Update",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 8308,
+ "name": "solr:others",
+ "label": "Others",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 8908,
+ "name": "solr:solr_admin",
+ "label": "Solr Admin",
+ "impliedGrants": [
+ "solr:query",
+ "solr:update",
+ "solr:others"
+ ]
+ },
+ {
+ "itemId": 9010,
+ "name": "kafka:publish",
+ "label": "Publish",
+ "impliedGrants": [
+ "kafka:describe"
+ ]
+ },
+ {
+ "itemId": 9011,
+ "name": "kafka:consume",
+ "label": "Consume",
+ "impliedGrants": [
+ "kafka:describe"
+ ]
+ },
+ {
+ "itemId": 9014,
+ "name": "kafka:configure",
+ "label": "Configure",
+ "impliedGrants": [
+ "kafka:describe"
+ ]
+ },
+ {
+ "itemId": 9015,
+ "name": "kafka:describe",
+ "label": "Describe",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 9017,
+ "name": "kafka:create",
+ "label": "Create",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 9018,
+ "name": "kafka:delete",
+ "label": "Delete",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 9016,
+ "name": "kafka:kafka_admin",
+ "label": "Kafka Admin",
+ "impliedGrants": [
+ "kafka:publish",
+ "kafka:consume",
+ "kafka:configure",
+ "kafka:describe",
+ "kafka:create",
+ "kafka:delete"
+ ]
+ },
+ {
+ "itemId": 11012,
+ "name": "atlas:read",
+ "label": "read",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 11013,
+ "name": "atlas:create",
+ "label": "create",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 11014,
+ "name": "atlas:update",
+ "label": "update",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 11015,
+ "name": "atlas:delete",
+ "label": "delete",
+ "impliedGrants": []
+ },
+ {
+ "itemId": 11016,
+ "name": "atlas:all",
+ "label": "All",
+ "impliedGrants": [
+ "atlas:read",
+ "atlas:create",
+ "atlas:update",
+ "atlas:delete"
+ ]
+ }
+ ],
+ "policyConditions": [
+ {
+ "itemId": 1,
+ "name": "accessed-after-expiry",
+ "evaluator": "org.apache.ranger.plugin.conditionevaluator.RangerScriptTemplateConditionEvaluator",
+ "evaluatorOptions": {
+ "scriptTemplate": "ctx.isAccessedAfter(\u0027expiry_date\u0027);"
+ },
+ "uiHint": "{ \"singleValue\":true }",
+ "label": "Accessed after expiry_date (yes/no)?",
+ "description": "Accessed after expiry_date? (yes/no)"
+ }
+ ],
+ "contextEnrichers": [
+ {
+ "itemId": 1,
+ "name": "TagEnricher",
+ "enricher": "org.apache.ranger.plugin.contextenricher.RangerTagEnricher",
+ "enricherOptions": {
+ "tagRetrieverClassName": "org.apache.ranger.plugin.contextenricher.RangerAdminTagRetriever",
+ "tagRefresherPollingInterval": "60000"
+ }
+ }
+ ],
+ "enums": [],
+ "dataMaskDef": {
+ "maskTypes": [],
+ "accessTypes": [],
+ "resources": []
+ },
+ "rowFilterDef": {
+ "accessTypes": [],
+ "resources": []
+ },
+ "id": 100,
+ "guid": "0d047248-baff-4cf9-8e9e-d5d377284b2e",
+ "isEnabled": true,
+ "createTime": "20170217-11:41:33.000-+0000",
+ "updateTime": "20170217-11:41:35.000-+0000",
+ "version": 11
+ },
+ "auditMode": "audit-default"
+ }
+}
\ No newline at end of file
diff --git a/sdk/java/src/test/resources/ranger-hdfs-audit.xml b/sdk/java/src/test/resources/ranger-hdfs-audit.xml
new file mode 100644
index 000000000000..24856dcb7f76
--- /dev/null
+++ b/sdk/java/src/test/resources/ranger-hdfs-audit.xml
@@ -0,0 +1,23 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
+  <property>
+    <name>xasecure.audit.is.enabled</name>
+    <value>false</value>
+  </property>
+</configuration>
\ No newline at end of file
diff --git a/sdk/java/src/test/resources/ranger-hdfs-security.xml b/sdk/java/src/test/resources/ranger-hdfs-security.xml
new file mode 100644
index 000000000000..02d8f013071b
--- /dev/null
+++ b/sdk/java/src/test/resources/ranger-hdfs-security.xml
@@ -0,0 +1,83 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
+
+  <property>
+    <name>ranger.plugin.hdfs.service.name</name>
+    <value>xxx</value>
+    <description>
+      Name of the Ranger service containing policies for this HDFS instance
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.source.impl</name>
+    <value>io.juicefs.permission.RangerAdminClientImpl</value>
+    <description>
+      Policy source.
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.cache.dir</name>
+    <value>${project.build.directory}</value>
+    <description>
+      Directory where Ranger policies are cached after successful retrieval from the source
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.pollIntervalMs</name>
+    <value>30000</value>
+    <description>
+      How often to poll for changes in policies?
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.cache.dir</name>
+    <value>xxx</value>
+    <description>
+      Directory where Ranger policies are cached after successful retrieval from the source
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.rest.client.connection.timeoutMs</name>
+    <value>120000</value>
+    <description>
+      HDFS plugin RangerRestClient connection timeout in milliseconds
+    </description>
+  </property>
+
+  <property>
+    <name>ranger.plugin.hdfs.policy.rest.client.read.timeoutMs</name>
+    <value>30000</value>
+    <description>
+      HDFS plugin RangerRestClient read timeout in milliseconds
+    </description>
+  </property>
+
+  <property>
+    <name>xasecure.add-hadoop-authorization</name>
+    <value>true</value>
+    <description>
+      Enable/Disable the default hadoop authorization (based on
+      rwxrwxrwx permission on the resource) if Ranger Authorization fails.
+    </description>
+  </property>
+
+</configuration>
\ No newline at end of file