Help: packaged jar fails at runtime in buildJobGraph #10

Open
tiloulou177 opened this issue Jan 6, 2021 · 2 comments

tiloulou177 commented Jan 6, 2021

hey, todd5176:
My runJar, Hadoop, and Flink all live on the same machine, A, and --bootstrapServers points to three Kafka machines, B, C, and D. Running java -cp xxx.jar mainClass on A fails with the error below. Could you advise on how to resolve it?

......
INFO [main] - Loading configuration property: jobmanager.execution.failover-strategy, region
INFO [main] - Loading configuration property: rest.bind-port, 50100-50200
INFO [main] - Loading configuration property: io.tmp.dirs, /tmp
INFO [main] - Loading configuration property: historyserver.web.address, dataMiddle-93
INFO [main] - Loading configuration property: historyserver.archive.fs.dir, hdfs://master/flink/ha/completed-jobs/
INFO [main] - Loading configuration property: resourcemanager.taskmanager-timeout, 900000
Exception in thread "main" org.apache.flink.client.program.ProgramInvocationException: The program caused an error:
Classpath: [file:/opt/runJar/flink-job-submit-test0105.jar]
System.out: (none)
System.err: (none)
at org.apache.flink.client.program.PackagedProgramUtils.generateException(PackagedProgramUtils.java:245)
at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:164)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:77)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:109)
at com.yss.datamiddle.flinkYarnSubmiter.utils.JobGraphBuildUtil.buildJobGraph(JobGraphBuildUtil.java:75)
at com.yss.datamiddle.flinkYarnSubmiter.executor.YarnJobClusterExecutor.submit(YarnJobClusterExecutor.java:73)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.submitFlinkJob(LauncherMain.java:55)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.main(LauncherMain.java:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:150)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:77)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:109)
at com.yss.datamiddle.flinkYarnSubmiter.utils.JobGraphBuildUtil.buildJobGraph(JobGraphBuildUtil.java:75)
at com.yss.datamiddle.flinkYarnSubmiter.executor.YarnJobClusterExecutor.submit(YarnJobClusterExecutor.java:73)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.submitFlinkJob(LauncherMain.java:55)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.main(LauncherMain.java:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:150)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:77)
at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:109)
at com.yss.datamiddle.flinkYarnSubmiter.utils.JobGraphBuildUtil.buildJobGraph(JobGraphBuildUtil.java:75)
at com.yss.datamiddle.flinkYarnSubmiter.executor.YarnJobClusterExecutor.submit(YarnJobClusterExecutor.java:73)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.submitFlinkJob(LauncherMain.java:55)
at com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain.main(LauncherMain.java:188)
......
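One reading of the trace (my interpretation, not confirmed by the project): the frames from LauncherMain.main down through PackagedProgram.callMainMethod repeat, which is the pattern you get when the jar handed to createJobGraph declares the submitter's own main class as its entry point, so building the job graph re-invokes that main, which submits again. A minimal self-contained sketch of that re-invocation loop (RecursiveMain and the depth guard are hypothetical, for illustration only):

```java
import java.lang.reflect.Method;

// Hypothetical illustration: a main method that, like
// PackagedProgram.callMainMethod, reflectively invokes the jar's
// entry point. If that entry point is the submitter itself,
// main re-enters and the same frames stack up repeatedly.
public class RecursiveMain {
    static int depth = 0; // guard so this demo terminates

    public static void main(String[] args) throws Exception {
        depth++;
        System.out.println("entered main, depth=" + depth);
        if (depth > 3) {
            return; // a real submitter has no such guard and loops until it fails
        }
        // Mirrors the reflective call the trace shows at
        // PackagedProgram.callMainMethod(PackagedProgram.java:288):
        Method entry = RecursiveMain.class.getMethod("main", String[].class);
        entry.invoke(null, (Object) new String[0]);
    }
}
```

If this reading is right, pointing entryPointClassName (or the shaded jar's manifest) at the job's own main class, rather than at the launcher, should break the loop.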
tiloulou177 (Author) commented:

Addendum: the parameters configured in buildJobParamsInfo() are as follows:

public static JobParamsInfo buildJobParamsInfo() {
        // Path to the executable jar
        String runJarPath = "/opt/runJar/flink-job-submit-test0106.jar";
        // Job arguments
        String[] execArgs = new String[]{"-jobName", "JavaStreamingKafkaSource0106", "--topic", "topic-calculate-java-sysRiskAndRAR-test", "--bootstrapServers", "192.168.100.105:9092,192.168.100.106:9092,192.168.100.107:9092"};
        // Job name
        String jobName = "Flink perjob submit";
        // Flink conf directory
        String flinkConfDir = "/opt/flink-1.11.1/conf";
        // Flink lib directory
        String flinkJarPath = "/opt/flink-1.11.1/lib";
        // YARN conf directory
        String yarnConfDir = "/opt/hadoop";
        // Run the streaming job in per-job mode
        String runMode = "yarn_perjob";
        // External files the job depends on
        String[] dependFile = new String[]{"/opt/flink-1.11.1/README.txt"};

        // Alternative local (Windows) configuration:
//        String runJarPath = "F:\\projects\\dataService-calculate-code-java\\target\\flink-job-submit-test0105.jar";
//        String[] execArgs = new String[]{"-jobName", "JavaStreamingKafkaSource1116", "--topic", "topic-calculate-java-sysRiskAndRAR-test", "--bootstrapServers", "localhost:8080"};
//        String jobName = "Flink perjob submit";
//        String flinkConfDir = "D:\\flink-1.11.1\\conf";
//        String flinkJarPath = "D:\\flink-1.11.1\\lib";
//        String yarnConfDir = "D:\\hadoop-2.10.1";
//        String runMode = "yarn_perjob";
//        String[] dependFile = new String[]{"D:\\flink-1.11.1\\README.txt"};

        // YARN queue to submit to
        String queue = "default";
        // yarn-session application id
        Properties yarnSessionConfProperties = new Properties();
        yarnSessionConfProperties.setProperty("yid", "application_1594265598097_5425");

        // Optional: with a shaded jar the main class can be declared in the
        // manifest and Flink picks it up automatically
        String entryPointClassName = "com.yss.datamiddle.flinkYarnSubmiter.launcher.LauncherMain";
//        String entryPointClassName = null;

        // Savepoint and parallelism settings
        Properties confProperties = new Properties();
        confProperties.setProperty("parallelism", "1");

        return JobParamsInfo.builder()
                .setExecArgs(execArgs)
                .setName(jobName)
                .setRunJarPath(runJarPath)
                .setDependFile(dependFile)
                .setFlinkConfDir(flinkConfDir)
                .setYarnConfDir(yarnConfDir)
                .setConfProperties(confProperties)
                .setYarnSessionConfProperties(yarnSessionConfProperties)
                .setFlinkJarPath(flinkJarPath)
                .setQueue(queue)
                .setRunMode(runMode)
                .setEntryPointClassName(entryPointClassName)
                .build();
    }

weilanying commented:

Has this been resolved? I'm running into the same problem.
