Prerequisites: you have created an access key and activated the LAS service.
Note: choose lf-client-2 or lf-client-3 for integration, depending on which Hive protocol version you need.
Compatible with the open-source Hive 2 protocol
Add the dependency
```xml
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>lf-client-2</artifactId>
    <version>1.0.1-RELEASE</version>
</dependency>
```
Request example
```java
String ak = {ak};  // your AccessKey ID
String sk = {sk};  // your SecretAccessKey
HiveConf conf = new HiveConf();
conf.set(HiveConf.ConfVars.METASTOREURIS.varname, {endPoint});
HiveMetaStoreClient hmsClient = new HiveMetaStoreClient(conf);
hmsClient.setRegion("cn-beijing");
hmsClient.setAccessKeyIdAndSecretAccessKey(ak, sk);
// List all databases visible to this credential
List<String> allDatabases = hmsClient.getAllDatabases();
System.out.println(allDatabases);
// Fetch the metadata of a single table
Table table = hmsClient.getTable("db_test", "tbl_test");
System.out.println(table);
```
Compatible with the open-source Hive 3 protocol
Add the dependency
```xml
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>lf-client-3</artifactId>
    <version>1.0.1-RELEASE</version>
</dependency>
```
Request example
```java
public static void main(String[] args) throws Exception {
    String ak = {ak};  // your AccessKey ID
    String sk = {sk};  // your SecretAccessKey
    Configuration conf = new Configuration();
    // Hive 3 reads the metastore URI from MetastoreConf rather than HiveConf
    conf.set(MetastoreConf.ConfVars.THRIFT_URIS.getVarname(), {endPoint});
    HiveMetaStoreClient hmsClient = new HiveMetaStoreClient(conf);
    hmsClient.setRegion("cn-beijing");
    hmsClient.setAccessKeyIdAndSecretAccessKey(ak, sk);
    Database database = hmsClient.getDatabase("xyu_test");
    Table table = hmsClient.getTable("db_test", "tbl_test");
    System.out.println(database);
    System.out.println(table);
}
```
Hive 2.x SDK
lf-client-2 | hive-exec | Release date | Release notes |
---|---|---|---|
1.0.1-RELEASE | 1.0.0-LF2-RELEASE | 2024-05-21 | |
1.0.0-RELEASE | 1.0.0-LF2-RELEASE | 2024-03-11 | |
Hive 3.x SDK
lf-client-3 | hive-exec | Release date | Release notes |
---|---|---|---|
1.0.1-RELEASE | 1.0.0-LF3-RELEASE | 2024-05-21 | |
1.0.0-RELEASE | 1.0.0-LF3-RELEASE | 2024-03-11 | |
region | endpoint | Remarks |
---|---|---|
cn-beijing | thrift://lakeformation.las.cn-beijing.ivolces.com:48869 | Accessible only from Volcengine's internal network; public Internet access is not supported. |
cn-shanghai | thrift://lakeformation.las.cn-shanghai.ivolces.com:48869 | |
cn-guangzhou | thrift://lakeformation.las.cn-guangzhou.ivolces.com:48869 | |
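The region/endpoint table above can also be kept in code, so the metastore URI passed to the client is derived from the target region instead of hard-coded. A minimal sketch (the class and method names here are illustrative, not part of the SDK):

```java
import java.util.Map;

public class LasEndpoints {
    // Thrift endpoints per region, copied from the table above (intranet-only, port 48869)
    static final Map<String, String> THRIFT_ENDPOINTS = Map.of(
        "cn-beijing",   "thrift://lakeformation.las.cn-beijing.ivolces.com:48869",
        "cn-shanghai",  "thrift://lakeformation.las.cn-shanghai.ivolces.com:48869",
        "cn-guangzhou", "thrift://lakeformation.las.cn-guangzhou.ivolces.com:48869"
    );

    // Looks up the endpoint for a region, failing fast on unsupported regions
    static String endpointFor(String region) {
        String ep = THRIFT_ENDPOINTS.get(region);
        if (ep == null) {
            throw new IllegalArgumentException("Unsupported region: " + region);
        }
        return ep;
    }

    public static void main(String[] args) {
        // prints the cn-beijing endpoint from the table
        System.out.println(endpointFor("cn-beijing"));
    }
}
```

The returned string can be used wherever the examples above pass {endPoint}.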
Prerequisites: you have created an access key and activated the LAS service.
Add the dependency
```xml
<dependency>
    <groupId>bytedance.olap</groupId>
    <artifactId>gemini-client-shaded</artifactId>
    <version>1.0.0.3-RELEASE</version>
</dependency>
```
Request example
```java
public static void main(String[] args) throws Exception {
    String ak = {ak};  // your AccessKey ID
    String sk = {sk};  // your SecretAccessKey
    String endPoint = {endPoint};
    GeminiClientIface signingClient = GeminiClientFactory.createSigningClient(endPoint, ak, sk, null, null, null);
    // Build the privilege to check: DESCRIBE on table db_test.tb_test
    List<GeminiPrivilege> privileges = new ArrayList<>();
    GeminiPrivilege privilege = new GeminiPrivilege();
    GeminiResource geminiResource = new GeminiResource();
    geminiResource.setResourceScope(GeminiResourceScope.TABLE);
    geminiResource.setRegion("cn-beijing");
    geminiResource.setTenant("2100000001");
    geminiResource.setSchemaName("@hive#db_test");
    geminiResource.setTableName("tb_test");
    privilege.setResource(geminiResource);
    privilege.setAction(GeminiPrivilegeAction.DESCRIBE);
    privileges.add(privilege);
    // The principal (account) whose privileges are being checked
    GeminiPrincipal principal = new GeminiPrincipal();
    principal.setTenant("2100000001");
    principal.setPrincipalType(GeminiPrincipalType.ACCOUNT);
    principal.setPrincipalName("1008632");
    signingClient.checkPrivilegesForPolicy(principal, privileges, null, null);
}
```
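In the request example, the schema name encodes the catalog and database together as `@hive#db_test`. A tiny helper for building such names is sketched below; note that this format is inferred solely from the sample above, and the helper is illustrative, not part of gemini-client:

```java
public class GeminiNames {
    // Builds an "@<catalog>#<database>" schema name, matching the request example's "@hive#db_test"
    static String schemaName(String catalog, String database) {
        return "@" + catalog + "#" + database;
    }

    public static void main(String[] args) {
        // prints "@hive#db_test"
        System.out.println(schemaName("hive", "db_test"));
    }
}
```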
Release version | Release date | Release notes |
---|---|---|
1.0.0.18-SNAPSHOT | 2024-04-17 | Added the revoke_privileges_for_dropped_resources method |
1.0.0.3-RELEASE | 2024-05-14 | Removed logging dependencies and relocated some third-party dependencies; CatalogName now defaults to hive, fixing grant and revoke calls that omitted catalogName; improved the gemini-client packaging |
region | endpoint | Remarks |
---|---|---|
cn-beijing | thrift://100.96.5.173:48869 | Accessible only from Volcengine's internal network; no public Internet access; allowlisting required. |
cn-shanghai | thrift://100.96.4.175:48869 | Accessible only from Volcengine's internal network; no public Internet access; allowlisting required. |
cn-guangzhou | thrift://100.96.4.70:48869 | Accessible only from Volcengine's internal network; no public Internet access; allowlisting required. |