bloglike:2021-06

  
 --- //[[stybla@turnovfree.net|Zdenek Styblik]] 2021/06/04 15:06//


===== When AWS CodeBuild and CloudWatch Logs integration doesn't work =====

Previously, I was praising the AWS CodeBuild and CloudWatch Logs integration. Today, it's something else :) I needed a quick-and-dirty CodeBuild project in order to try something out. I made a couple of mistakes along the way, but I still think there are cases when logs won't appear in CloudWatch and you still need to offload logs to an S3 bucket.
 +
<code>
[Container] 2021/06/14 15:05:28 Waiting for agent ping
[Container] 2021/06/14 15:05:31 Waiting for DOWNLOAD_SOURCE
[Container] 2021/06/14 15:05:32 Phase is DOWNLOAD_SOURCE
[Container] 2021/06/14 15:05:32 CODEBUILD_SRC_DIR=/codebuild/output/src123899037/src
[Container] 2021/06/14 15:05:32 YAML location is /codebuild/readonly/buildspec.yml
[Container] 2021/06/14 15:05:32 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/06/14 15:05:32 Phase context status code: YAML_FILE_ERROR Message: wrong number of container tags, expected 1
</code>


By the way, there was no ''version: 0.2'' in the buildspec file.
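
For reference, a minimal buildspec with the ''version'' key in place might look like the following sketch (the ''echo'' command is just a placeholder):

<code yaml>
# Minimal buildspec sketch; "version" is mandatory,
# and without it CodeBuild rejects the file.
version: 0.2

phases:
  build:
    commands:
      - echo "it builds"
</code>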
 +
<code>
[Container] 2021/06/14 15:46:59 Waiting for agent ping
[Container] 2021/06/14 15:47:02 Waiting for DOWNLOAD_SOURCE
mounting '127.0.0.1:/' failed. connection reset by peer
</code>

I had to set up an S3 bucket (so much for quick) in order to get to the bottom of what exactly was wrong.
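
S3 logging lives in the CodeBuild project's logs configuration next to the CloudWatch one; a ''logsConfig'' fragment along these lines (bucket name is made up) enables both destinations:

<code>
"logsConfig": {
  "cloudWatchLogs": {"status": "ENABLED"},
  "s3Logs": {
    "status": "ENABLED",
    "location": "my-debug-bucket/codebuild-logs"
  }
}
</code>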

 --- //[[stybla@turnovfree.net|Zdenek Styblik]] 2021/06/17 09:47//


===== It's a trap! and its name is FIFO queue =====

When migrating from Redis-as-a-queue to AWS SQS and making the application more integrated with the cloud, we decided to use a FIFO queue. The intentions were good: ideally, keep the order of messages and keep duplicates as low as possible. We were aware of the following excerpt from the FAQ:

> By design, Amazon SQS FIFO queues don't serve messages from the same message group to more than one consumer at a time. However, if your FIFO queue has multiple message groups, you can take advantage of parallel consumers

but not so much of its implications, at least for django-q. We gave it a try and it failed miserably due to a combination of the max-attempts setting and a buggy job running way past the limit.

As for the implications (which I haven't verified): FIFO with a single message group means that only one job can be executed at a time and nothing else. Got django-q workers/threads? Pointless. Scaling? Yes, if you emit messages into different message groups and are willing to accept that some messages might not be processed at all on scale-down. I'm fairly sure both of these can be solved, but is it worth it? Most likely not.
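
The blocking behaviour per message group can be sketched with a toy model (plain Python, no AWS involved, all names made up):

<code python>
from collections import deque


class ToyFifoQueue:
    """Toy model of SQS FIFO delivery: at most one in-flight
    (received but not yet deleted) message per message group."""

    def __init__(self):
        self._messages = deque()   # (group_id, body) in arrival order
        self._in_flight = set()    # groups with an unacked message

    def send(self, group_id, body):
        self._messages.append((group_id, body))

    def receive(self, max_messages=10):
        """Return up to max_messages, skipping every group that
        already has a message in flight."""
        batch = []
        remaining = deque()
        while self._messages:
            group_id, body = self._messages.popleft()
            if len(batch) < max_messages and group_id not in self._in_flight:
                self._in_flight.add(group_id)
                batch.append((group_id, body))
            else:
                remaining.append((group_id, body))
        self._messages = remaining
        return batch

    def delete(self, group_id):
        """Ack the group's in-flight message, unblocking the group."""
        self._in_flight.discard(group_id)


q = ToyFifoQueue()
for i in range(3):
    q.send("jobs", f"task-{i}")   # one message group, like django-q here

print(q.receive())  # [('jobs', 'task-0')] -- the other two just wait
</code>

With a single message group every worker beyond the first is idle, which matches the "pointless workers/threads" observation above; with three distinct groups the same ''receive()'' call would hand out three messages at once.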

Therefore, we switched to a standard SQS queue: no content deduplication (which probably didn't work anyway with this setup), no FIFO, and a possibility of duplicates. However, that's life, and it works.

 --- //[[stybla@turnovfree.net|Zdenek Styblik]] 2021/06/25 20:56//
bloglike/2021-06.txt · Last modified: 2021/06/29 03:51 by stybla