Failure Origin: replication, Message: Something went wrong during replication

  • Is this your first time deploying Airbyte?: Yes
  • OS Version / Instance: Ubuntu 22.04
  • Memory / Disk: 16 GB / 160 GB
  • Deployment: Docker
  • Airbyte Version: 0.40.26
  • Source name/version: posthog/0.1.8
  • Destination name/version: BigQuery / 1.2.9
  • Step: The issue happens when trying to sync data between PostHog and BigQuery. The sync seems to run and read from the source, but then fails to move on.
  • Error from error logs:
2023-01-06 16:35:49 WARN i.t.i.w.ActivityWorker$TaskHandlerImpl(logExceptionDuringResultReporting):365 - Failure during reporting of activity result to the server. ActivityId = 8f06f3df-62b5-334b-92fe-8ebf6008f402, ActivityType = Replicate, WorkflowId=sync_6, WorkflowType=SyncWorkflow, RunId=3b79d5ab-382a-4e42-8f1f-00fb4fd8700e
io.grpc.StatusRuntimeException: NOT_FOUND: workflow execution already completed
	at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:271) ~[grpc-stub-1.50.2.jar:1.50.2]
	at io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:252) ~[grpc-stub-1.50.2.jar:1.50.2]
	at io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:165) ~[grpc-stub-1.50.2.jar:1.50.2]
	at io.temporal.api.workflowservice.v1.WorkflowServiceGrpc$WorkflowServiceBlockingStub.respondActivityTaskFailed(WorkflowServiceGrpc.java:3866) ~[temporal-serviceclient-1.17.0.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.lambda$sendReply$1(ActivityWorker.java:320) ~[temporal-sdk-1.17.0.jar:?]
	at io.temporal.internal.retryer.GrpcRetryer.lambda$retry$0(GrpcRetryer.java:52) ~[temporal-serviceclient-1.17.0.jar:?]
	at io.temporal.internal.retryer.GrpcSyncRetryer.retry(GrpcSyncRetryer.java:67) ~[temporal-serviceclient-1.17.0.jar:?]
	at io.temporal.internal.retryer.GrpcRetryer.retryWithResult(GrpcRetryer.java:60) ~[temporal-serviceclient-1.17.0.jar:?]
	at io.temporal.internal.retryer.GrpcRetryer.retry(GrpcRetryer.java:50) ~[temporal-serviceclient-1.17.0.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.sendReply(ActivityWorker.java:315) ~[temporal-sdk-1.17.0.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handleActivity(ActivityWorker.java:252) ~[temporal-sdk-1.17.0.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:206) ~[temporal-sdk-1.17.0.jar:?]
	at io.temporal.internal.worker.ActivityWorker$TaskHandlerImpl.handle(ActivityWorker.java:179) ~[temporal-sdk-1.17.0.jar:?]
	at io.temporal.internal.worker.PollTaskExecutor.lambda$process$0(PollTaskExecutor.java:93) ~[temporal-sdk-1.17.0.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]

Hello there! You are receiving this message because none of your fellow community members has stepped in to respond to your topic post. (If you are a community member and you are reading this response, feel free to jump in if you have the answer!) As a result, the Community Assistance Team has been made aware of this topic and will be investigating and responding as quickly as possible.
Some important considerations that will help you get your issue solved faster:

  • It is best to use our topic creation template; if you haven’t yet, we recommend posting a follow-up with the requested information. With that information the team can more quickly search for similar connector and platform issues and troubleshoot your specific question or problem.
  • Make sure to upload the complete log file; a common investigation roadblock is that sometimes the error for the issue happens well before the problem is surfaced to the user, and so having the tail of the log is less useful than having the whole log to scan through.
  • Be as descriptive and specific as possible; when investigating it is extremely valuable to know what steps were taken to encounter the issue, what version of connector / platform / Java / Python / docker / k8s was used, etc. The more context supplied, the quicker the investigation can start on your topic and the faster we can drive towards an answer.
  • We in the Community Assistance Team are glad you’ve made yourself part of our community, and we’ll do our best to answer your questions and resolve the problems as quickly as possible. Expect to hear from a specific team member as soon as possible.

Thank you for your time and attention.
Best,
The Community Assistance Team

Hi @Vanesa, could you please include your full logs? The current info is not enough to debug the issue unfortunately!

Hi, yes, sure. I’m sending the logs from January 6th that I mentioned in the thread.

We have also tried syncing only the persons table, using the bucket loading method instead of INSERT. That sync failed as well, so I’m attaching those logs too.

d73dc327_70be_4623_b7b6_9f4b466dd679_logs_6.txt (330.7 KB)
d73dc327_70be_4623_b7b6_9f4b466dd679_logs_9.txt (782.3 KB)

Thanks for the logs! I can’t pinpoint exactly what’s causing the error yet, but I’ve seen this happen before if the data is malformed/corrupted.

Another thought is that perhaps the node is running out of resources and cancelling the sync. Do you have a lot of connectors running?
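If you want to rule out the malformed-data theory, one option is to pull a few raw records from the source and validate them against the stream’s JSON schema yourself. Below is a rough, hypothetical sketch (not anything Airbyte ships) using the com.networknt.schema validator that shows up in your destination logs; the schema and record are placeholders, not the real persons stream:

// Hypothetical sketch, not Airbyte code: validate a sample source record against
// a stream's JSON schema using the networknt validator seen in the destination logs.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;
import java.util.Set;

public class RecordCheck {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();

    // Placeholder stream schema (swap in the schema Airbyte discovered for your stream).
    JsonNode schema = mapper.readTree("""
        {
          "type": "object",
          "properties": {
            "id": { "type": "string" },
            "created_at": { "type": "string" }
          }
        }
        """);

    // Placeholder record, e.g. one row exported directly from the source API.
    JsonNode record = mapper.readTree("""
        { "id": "123", "created_at": "2023-01-06T16:35:49Z" }
        """);

    JsonSchema validator = JsonSchemaFactory
        .getInstance(SpecVersion.VersionFlag.V7)
        .getSchema(schema);
    Set<ValidationMessage> errors = validator.validate(record);
    errors.forEach(e -> System.out.println("Malformed record: " + e.getMessage()));
    if (errors.isEmpty()) {
      System.out.println("Record matches the declared schema.");
    }
  }
}

If records validate cleanly, the resource theory becomes more likely, and watching docker stats on the worker containers during a sync would be the next thing to check.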

@Vanesa @natalyjazzviolin I am facing the same issue: the sync worker failed. We have also increased our disk space to avoid the memory issue, but it’s giving us the same error as before.
We have tried with only one record, but it’s still not loading any data.
Full log below:

2023-03-03 11:02:33 - Additional Failure Information: java.lang.RuntimeException: No properties node in stream schema
Attempt 1
Attempt 2
Attempt 3
0 Bytes | no records | no records | 2s
Failure Origin: replication, Message: Something went wrong during replication
/tmp/workspace/35/2/logs.log


2023-03-03 11:02:34 destination > 2023-03-03 11:02:34 INFO i.a.i.b.IntegrationRunner(runInternal):125 - Integration config: IntegrationConfig{command=WRITE, configPath='destination_config.json', catalogPath='destination_catalog.json', statePath='null'}
2023-03-03 11:02:34 destination > 2023-03-03 11:02:34 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword examples - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2023-03-03 11:02:34 destination > 2023-03-03 11:02:34 WARN c.n.s.JsonMetaSchema(newValidator):338 - Unknown keyword example - you should define your own Meta Schema. If the keyword is irrelevant for validation, just use a NonValidationKeyword
2023-03-03 11:02:34 destination > 2023-03-03 11:02:34 INFO i.a.i.b.IntegrationRunner(runInternal):171 - Completed integration: io.airbyte.integrations.destination.e2e_test.TestingDestinations
2023-03-03 11:02:34 destination > 2023-03-03 11:02:34 INFO i.a.i.d.e.TestingDestinations(main):73 - completed destination: class io.airbyte.integrations.destination.e2e_test.TestingDestinations
2023-03-03 11:02:34 ERROR i.a.w.g.DefaultReplicationWorker(replicate):259 - Sync worker failed.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: No properties node in stream schema
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) ~[?:?]
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) ~[?:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.replicate(DefaultReplicationWorker.java:251) ~[io.airbyte-airbyte-commons-worker-0.41.0.jar:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:175) ~[io.airbyte-airbyte-commons-worker-0.41.0.jar:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.run(DefaultReplicationWorker.java:91) ~[io.airbyte-airbyte-commons-worker-0.41.0.jar:?]
	at io.airbyte.workers.temporal.TemporalAttemptExecution.lambda$getWorkerThread$5(TemporalAttemptExecution.java:195) ~[io.airbyte-airbyte-workers-0.41.0.jar:?]
	at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: java.lang.RuntimeException: No properties node in stream schema
	at io.airbyte.workers.general.DefaultReplicationWorker.populateStreamToAllFields(DefaultReplicationWorker.java:696) ~[io.airbyte-airbyte-commons-worker-0.41.0.jar:?]
	at io.airbyte.workers.general.DefaultReplicationWorker.lambda$
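From what I can tell, the relevant line is java.lang.RuntimeException: No properties node in stream schema, which suggests that at least one selected stream’s JSON schema in the configured catalog has no top-level properties object, so the worker can’t build its list of fields. A minimal, hypothetical sketch of that kind of check (not Airbyte’s actual implementation), just to illustrate what a schema with a properties node looks like:

// Hypothetical sketch of the failing condition, assuming a Jackson JsonNode schema.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StreamSchemaCheck {
  public static void main(String[] args) throws Exception {
    // A schema like this, with a top-level "properties" object, is what the worker
    // expects; this sketch throws the same message when that node is missing.
    String schemaJson = """
        {
          "type": "object",
          "properties": {
            "id": { "type": "string" },
            "created_at": { "type": "string", "format": "date-time" }
          }
        }
        """;

    JsonNode schema = new ObjectMapper().readTree(schemaJson);
    JsonNode properties = schema.get("properties");
    if (properties == null || !properties.isObject()) {
      throw new RuntimeException("No properties node in stream schema");
    }
    System.out.println("Schema declares " + properties.size() + " fields.");
  }
}

In practice this usually points at the discovered schema for that stream coming back empty, so refreshing the source schema on the connection and re-selecting the streams seems like a reasonable first thing to try.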

Let us know the exact error you’re seeing.