
Issue 2021 - June

AWS CodeBuild and CloudWatch Logs integration

I couldn't find any pictures (examples) of what the AWS CodeBuild and CloudWatch Logs integration looks like, so I've snapped some myself.

It looks alright and is usable, especially with logs available right in the build status. Better than fishing logs out of S3.

Zdenek Styblik 2021/06/04 15:06

When AWS CodeBuild and CloudWatch Logs integration doesn't work

Previously, I was praising the AWS CodeBuild and CloudWatch Logs integration. Today, it's something else :) I was in need of a quick-and-dirty CodeBuild project in order to try something. I made a couple of mistakes along the way, but I still think it's possible there are cases when logs won't appear in CloudWatch and when you still need to offload logs to an S3 bucket.

[Container] 2021/06/14 15:05:28 Waiting for agent ping
[Container] 2021/06/14 15:05:31 Waiting for DOWNLOAD_SOURCE
[Container] 2021/06/14 15:05:32 Phase is DOWNLOAD_SOURCE
[Container] 2021/06/14 15:05:32 CODEBUILD_SRC_DIR=/codebuild/output/src123899037/src
[Container] 2021/06/14 15:05:32 YAML location is /codebuild/readonly/buildspec.yml
[Container] 2021/06/14 15:05:32 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/06/14 15:05:32 Phase context status code: YAML_FILE_ERROR Message: wrong number of container tags, expected 1

By the way, there was no version: 0.2 in the buildspec file.
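For reference, a minimal buildspec that includes the required version key looks roughly like this (a sketch; the command is just a placeholder):

```yaml
version: 0.2

phases:
  build:
    commands:
      - echo "hello from CodeBuild"
```

Leaving out version: 0.2 (or other structural mistakes in the YAML) surfaces as the rather cryptic YAML_FILE_ERROR above.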

[Container] 2021/06/14 15:46:59 Waiting for agent ping
[Container] 2021/06/14 15:47:02 Waiting for DOWNLOAD_SOURCE
mounting '127.0.0.1:/' failed. connection reset by peer

I had to set up an S3 bucket (so much for quick) in order to get to the bottom of what exactly was wrong.

Zdenek Styblik 2021/06/17 09:47

It's a trap! And its name is FIFO queue

When migrating from Redis-as-a-queue to AWS SQS and making the application more integrated with the cloud, we decided to use a FIFO queue. The intentions were good: ideally keep the order of messages and keep duplicates as low as possible. We were aware of the following excerpt from the FAQ:

By design, Amazon SQS FIFO queues don't serve messages from the same message group to more than one consumer at a time. However, if your FIFO queue has multiple message groups, you can take advantage of parallel consumers

but not so much of its implications, at least for django-q. We gave it a try and it failed miserably due to a combination of max attempts and a buggy job running way past the limit.

As for the implications (which I haven't verified), FIFO means that only one job can be executed at a time and nothing else. Do you have more than just one worker/thread in django-q? Pointless. Scaling based on “queue length”? Yes, if you emit messages into different message groups and are willing to accept the fact that some messages might not be processed at all on scale-down. I'm fairly sure both of these can be solved, but is it worth it? Most likely not.
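The "one in-flight message per message group" behaviour from the FAQ can be illustrated with a toy model (this is a pure-Python simulation of the delivery semantics, not the SQS API; all names are made up):

```python
from collections import deque, defaultdict


class FifoQueueModel:
    """Toy model of SQS FIFO delivery semantics: at most one
    unacknowledged (in-flight) message per message group."""

    def __init__(self):
        self.groups = defaultdict(deque)
        self.in_flight = set()  # group ids with an in-flight message

    def send(self, group_id, body):
        self.groups[group_id].append(body)

    def receive(self):
        """Return (group_id, body) of the next deliverable message,
        or None. Groups with an in-flight message are skipped, which
        is why a single message group caps you at one consumer."""
        for group_id, messages in self.groups.items():
            if messages and group_id not in self.in_flight:
                self.in_flight.add(group_id)
                return group_id, messages[0]
        return None

    def ack(self, group_id):
        """Delete the in-flight message and unblock the group."""
        self.groups[group_id].popleft()
        self.in_flight.discard(group_id)


q = FifoQueueModel()
q.send("orders", "msg-1")
q.send("orders", "msg-2")
print(q.receive())  # ('orders', 'msg-1')
print(q.receive())  # None: 'orders' already has a message in flight
q.ack("orders")
print(q.receive())  # ('orders', 'msg-2')
```

With a single message group, a second django-q worker receives nothing until the first worker acknowledges, so extra workers just idle.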

Therefore, we switched to a standard SQS queue: no content deduplication (which probably didn't work anyway with this setup), no FIFO, and the possibility of duplicates. However, that's life, and it works.
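In infrastructure code the switch is small; a CloudFormation sketch of the standard queue we ended up with (resource and queue names are made up):

```yaml
Resources:
  JobQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: job-queue          # standard queue: no ".fifo" suffix
      # The FIFO variant we moved away from would have been:
      #   QueueName: job-queue.fifo
      #   FifoQueue: true
      #   ContentBasedDeduplication: true
```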

Zdenek Styblik 2021/06/25 20:56

How to run django-q or similar at AWS Elastic Beanstalk

This is like one year late and doesn't bring anything new nowadays (I'll explain why in a bit). I've seen a couple of questions here and there about how to run django-q at AWS Elastic Beanstalk (EBT), and as a matter of fact, I was facing the same conundrum.

EBT offers “web application” and “worker” environments, and django-q falls, or fits, into the latter rather than the former, but:

When you launch a worker environment, Elastic Beanstalk installs the necessary support files for your programming language of choice and a daemon on each EC2 instance in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing.

A daemon passing messages from the queue as HTTP requests? That won't fly with django-q (at least I'm not aware such a thing is possible), and it doesn't fit the web application → queue → worker(s)/django-q architecture. Also, my customer used Redis as a queue in the “runs at laptop” PoC (Redis was replaced by SQS later on).

Ok, the “worker” environment is a no-go. However, if you deploy django-q into a “web application” environment, and there is nothing stopping you from doing so, the health of the EBT application (environment) will be permanently red. The solution? A simple wrapper with an HTTP server and a health endpoint. You can find such an example in django-q's documentation now. No, unfortunately I haven't contributed that. No, that's not where I got this idea either. Why? Because it wasn't there one year ago ;)
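The wrapper idea can be sketched in a few lines of stdlib Python (this is my own minimal sketch, not the example from django-q's documentation; port and paths are arbitrary, and the commented-out cluster start is an assumption about how you'd wire it up):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Answer the load balancer's health check with 200 so the
    environment stays green while the worker runs in the background."""

    def do_GET(self):
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)
        else:
            body = b"Not Found"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep health-check noise out of the worker logs


def serve_health(port=8000):
    """Start the health endpoint in a daemon thread; return the server."""
    server = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    serve_health()
    # In a real deployment the worker runs in the foreground here,
    # e.g. (assumption; exact invocation depends on your setup):
    # from django_q.cluster import Cluster
    # Cluster().start()
```

Point the environment's health check URL at /health and the “web application” environment stops flapping red.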

Zdenek Styblik 2021/06/29 08:23

bloglike/2021-06.txt · Last modified: 2021/06/29 03:51 by stybla