Minio to Iceberg connector issue with java.lang.NoSuchMethodError

Summary

The Minio to Iceberg connector is failing with a java.lang.NoSuchMethodError raised while SLF4J tries to construct org.apache.logging.slf4j.Log4jLoggerFactory during logger initialization in the destination connector. The logs show failures in the destination, source, and replication processes, and the job ultimately fails after multiple retries.
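
The stack trace suggests mixed SLF4J/Log4j bridge versions on the destination's classpath: an SLF4J 1.x StaticLoggerBinder paired with a Log4jLoggerFactory from a bridge built for a different SLF4J/Log4j generation, which would explain the missing no-arg constructor. As a rough way to confirm which logging binding a JVM actually picks up, here is a minimal, self-contained sketch; the class name and the interpretation of its output are mine, not taken from the logs or from Airbyte code.

import org.slf4j.ILoggerFactory;
import org.slf4j.LoggerFactory;

// Hypothetical diagnostic class; not part of Airbyte or the Iceberg destination.
public class LoggingBindingCheck {
    public static void main(String[] args) {
        // Which SLF4J backend did this JVM actually bind to?
        // Inside the failing destination this call may itself reproduce the
        // NoSuchMethodError, which would confirm the conflicting binding.
        ILoggerFactory factory = LoggerFactory.getILoggerFactory();
        System.out.println("Bound ILoggerFactory: " + factory.getClass().getName());

        // SLF4J 1.x discovers its backend via org.slf4j.impl.StaticLoggerBinder
        // (the class at the top of the stack trace); SLF4J 2.x uses the
        // ServiceLoader-based provider mechanism instead.
        try {
            Class.forName("org.slf4j.impl.StaticLoggerBinder");
            System.out.println("SLF4J 1.x StaticLoggerBinder is present on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("No SLF4J 1.x StaticLoggerBinder (2.x-style provider binding)");
        }

        try {
            Class.forName("org.apache.logging.slf4j.Log4jLoggerFactory");
            System.out.println("Log4j SLF4J bridge (Log4jLoggerFactory) is present on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("No Log4j SLF4J bridge found");
        }
    }
}

If both the 1.x binder and a bridge built for a different SLF4J generation show up together, the usual fix is to align or exclude the duplicate slf4j/log4j artifacts in the destination image, though I have not verified exactly which jars the Iceberg destination currently ships.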


Question

Minio to Iceberg seems broken: it only created a folder containing a temp file. I am attaching the logs below.

logs--------------------------------------------------------

2024-05-29 18:58:21 platform > failures: [ {
  "failureOrigin" : "destination",
  "failureType" : "system_error",
  "internalMessage" : "java.lang.NoSuchMethodError: org.apache.logging.slf4j.Log4jLoggerFactory: method 'void <init>()' not found",
  "externalMessage" : "Something went wrong in the connector. See the logs for more details.",
  "metadata" : {
    "attemptNumber" : 4,
    "jobId" : 52,
    "from_trace_message" : true,
    "connector_command" : "write"
  },
  "stacktrace" : "java.lang.NoSuchMethodError: org.apache.logging.slf4j.Log4jLoggerFactory: method 'void <init>()' not found\n\tat org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:53)\n\tat org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:41)\n\tat org.apache.spark.internal.Logging$.org$apache$spark$internal$Logging$$isLog4j2(Logging.scala:232)\n\tat org.apache.spark.internal.Logging.initializeLogging(Logging.scala:129)\n\tat org.apache.spark.internal.Logging.initializeLogIfNecessary(Logging.scala:115)\n\tat org.apache.spark.internal.Logging.initializeLogIfNecessary$(Logging.scala:109)\n\tat org.apache.spark.SparkContext.initializeLogIfNecessary(SparkContext.scala:84)\n\tat org.apache.spark.internal.Logging.initializeLogIfNecessary(Logging.scala:106)\n\tat org.apache.spark.internal.Logging.initializeLogIfNecessary$(Logging.scala:105)\n\tat org.apache.spark.SparkContext.initializeLogIfNecessary(SparkContext.scala:84)\n\tat org.apache.spark.internal.Logging.log(Logging.scala:53)\n\tat org.apache.spark.internal.Logging.log$(Logging.scala:51)\n\tat org.apache.spark.SparkContext.log(SparkContext.scala:84)\n\tat org.apache.spark.internal.Logging.logInfo(Logging.scala:61)\n\tat org.apache.spark.internal.Logging.logInfo$(Logging.scala:60)\n\tat org.apache.spark.SparkContext.logInfo(SparkContext.scala:84)\n\tat org.apache.spark.SparkContext.<init>(SparkContext.scala:195)\n\tat org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2714)\n\tat org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:953)\n\tat scala.Option.getOrElse(Option.scala:201)\n\tat org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:947)\n\tat io.airbyte.integrations.destination.iceberg.IcebergDestination.getConsumer(IcebergDestination.java:85)\n\tat io.airbyte.cdk.integrations.base.Destination.getSerializedMessageConsumer(Destination.java:54)\n\tat io.airbyte.cdk.integrations.base.IntegrationRunner.runInternal(IntegrationRunner.java:186)\n\tat io.airbyte.cdk.integrations.base.IntegrationRunner.run(IntegrationRunner.java:125)\n\tat io.airbyte.integrations.destination.iceberg.IcebergDestination.main(IcebergDestination.java:42)\n",
  "timestamp" : 1717009093941
}, {
  "failureOrigin" : "destination",
  "internalMessage" : "Destination process exited with non-zero exit code 1",
  "externalMessage" : "Something went wrong within the destination connector",
  "metadata" : {
    "attemptNumber" : 4,
    "jobId" : 52,
    "connector_command" : "write"
  },
  "stacktrace" : "io.airbyte.workers.internal.exception.DestinationException: Destination process exited with non-zero exit code 1\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromDestination(BufferedReplicationWorker.java:472)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsync$2(BufferedReplicationWorker.java:227)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n",
  "timestamp" : 1717009094197
}, {
  "failureOrigin" : "source",
  "internalMessage" : "Source process read attempt failed",
  "externalMessage" : "Something went wrong within the source connector",
  "metadata" : {
    "attemptNumber" : 4,
    "jobId" : 52,
    "connector_command" : "read"
  },
  "stacktrace" : "io.airbyte.workers.internal.exception.SourceException: Source process read attempt failed\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:373)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithHeartbeatCheck$3(BufferedReplicationWorker.java:234)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: java.lang.IllegalStateException: Source process is still alive, cannot retrieve exit value.\n\tat com.google.common.base.Preconditions.checkState(Preconditions.java:512)\n\tat io.airbyte.workers.internal.DefaultAirbyteSource.getExitValue(DefaultAirbyteSource.java:140)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.readFromSource(BufferedReplicationWorker.java:359)\n\t... 5 more\n",
  "timestamp" : 1717009100750
}, {
  "failureOrigin" : "replication",
  "internalMessage" : "io.airbyte.workers.exception.WorkerException: Destination process exit with code 1. This warning is normal if the job was cancelled.",
  "externalMessage" : "Something went wrong during replication",
  "metadata" : {
    "attemptNumber" : 4,
    "jobId" : 52
  },
  "stacktrace" : "java.lang.RuntimeException: io.airbyte.workers.exception.WorkerException: Destination process exit with code 1. This warning is normal if the job was cancelled.\n\tat io.airbyte.workers.general.BufferedReplicationWorker$CloseableWithTimeout.lambda$close$0(BufferedReplicationWorker.java:517)\n\tat io.airbyte.workers.general.BufferedReplicationWorker.lambda$runAsyncWithTimeout$5(BufferedReplicationWorker.java:255)\n\tat java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: io.airbyte.workers.exception.WorkerException: Destination process exit with code 1. This warning is normal if the job was cancelled.\n\tat io.airbyte.workers.internal.DefaultAirbyteDestination.close(DefaultAirbyteDestination.java:187)\n\tat io.airbyte.workers.general.BufferedReplicationWorker$CloseableWithTimeout.lambda$close$0(BufferedReplicationWorker.java:515)\n\t... 5 more\n",
  "timestamp" : 1717009100751
} ]
2024-05-29 18:58:21 platform >
2024-05-29 18:58:21 platform > ----- END REPLICATION -----
2024-05-29 18:58:21 platform >
2024-05-29 18:58:21 platform > Retry State: RetryManager(completeFailureBackoffPolicy=BackoffPolicy(minInterval=PT10S, maxInterval=PT30M, base=3), partialFailureBackoffPolicy=null, successiveCompleteFailureLimit=5, totalCompleteFailureLimit=10, successivePartialFailureLimit=1000, totalPartialFailureLimit=10, successiveCompleteFailures=5, totalCompleteFailures=5, successivePartialFailures=0, totalPartialFailures=0)
Backoff before next attempt: 13 minutes 30 seconds
2024-05-29 18:58:21 platform > Failing job: 52, reason: Job failed after too many retries for connection 79ca74be-46d6-4f0e-a241-18b31dab78e1
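
One side note on the retry line above: the reported backoff of 13 minutes 30 seconds is consistent with exponential backoff derived from the RetryManager values minInterval=PT10S, base=3, and successiveCompleteFailures=5. The exact formula Airbyte uses is not shown in these logs, so the sketch below only reproduces the arithmetic under that assumption.

import java.time.Duration;

// Hypothetical helper; only reproduces the arithmetic implied by the log line.
public class BackoffEstimate {
    public static void main(String[] args) {
        // Values taken from the RetryManager log line.
        Duration minInterval = Duration.ofSeconds(10);   // PT10S
        Duration maxInterval = Duration.ofMinutes(30);   // PT30M
        int base = 3;
        int successiveCompleteFailures = 5;

        // Assumed formula: minInterval * base^(failures - 1), capped at maxInterval.
        long seconds = minInterval.getSeconds()
                * (long) Math.pow(base, successiveCompleteFailures - 1);
        Duration backoff = Duration.ofSeconds(Math.min(seconds, maxInterval.getSeconds()));

        System.out.println(backoff); // prints PT13M30S, i.e. 13 minutes 30 seconds
    }
}

That is, 10 s * 3^4 = 810 s = 13 minutes 30 seconds, still under the 30-minute cap, which matches the backoff reported in the log before the job was finally failed.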


This topic was created from a Slack thread to give it more visibility. It is read-only here; the original discussion continues on Slack.


["minio-to-iceberg", "connector-issue", "java.lang.NoSuchMethodError", "org.apache.logging.slf4j.Log4jLoggerFactory", "destination-failure", "source-failure", "replication-failure", "job-failed"]