Volunteers (hang around the EngineersMY Slack)
https://engineers.my/
Monthly meetup announced on meetup.com
Get in touch via meetup.com
or
Slack us to volunteer / speak / sponsor
DevKami curated meetups: https://devkami.com/meetups/
KL meetups by Azuan (@alienxp03): http://malaysia.herokuapp.com/#upcoming
502 Errors
Major outage impacted all Cloudflare services globally. We saw a massive spike in CPU that caused primary and secondary systems to fall over. We shut down the process that was causing the CPU spike.
Service restored to normal within ~30 minutes. We’re now investigating the root cause of what happened.
Bad Config Deploy
On July 2, we deployed a new rule in our WAF Managed Rules that caused CPUs to become exhausted on every CPU core that handles HTTP/HTTPS traffic on the Cloudflare network worldwide.
…update contained a regular expression that backtracked enormously and exhausted CPU used for HTTP/HTTPS serving
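The failure class here is catastrophic backtracking. A minimal sketch (using a classic pathological pattern, not the actual WAF rule): nested quantifiers like `(a+)+` force a backtracking regex engine to try exponentially many ways to split the input once the overall match fails, pinning a CPU core on a single string.

```python
import re
import time

# Hypothetical pathological pattern -- NOT the actual Cloudflare WAF rule.
# The nested quantifier (a+)+ is ambiguous: the engine must try every way
# to partition the run of "a"s between the inner and outer repetitions.
PATTERN = re.compile(r"^(a+)+$")

def match_time(s):
    start = time.perf_counter()
    result = PATTERN.match(s)
    return result, time.perf_counter() - start

# A matching input returns almost instantly.
ok, fast = match_time("a" * 20)

# One trailing non-matching character forces ~2^19 failed partitions
# before the engine gives up -- exponential in the input length.
bad, slow = match_time("a" * 20 + "b")
print(f"match: {fast:.6f}s, failed match with backtracking: {slow:.6f}s")
```

Grow the input by a few characters and the failure time doubles each step, which is why a single bad rule could exhaust every core serving traffic.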
US celebrates Independence Day by liberating people from social media slavery
…kidding
MalayMail: $34bil acquisition
Article: A simple terminal UI for both docker and docker-compose
Strategies boil down to either "Send" (emit metrics/logs/traces from inside the function) or "Scrape" (pull them from CloudWatch / X-Ray afterwards):
Pros
Cons
The most common approach to bypass the per-invocation performance penalty of the “Send” approach is to instead “Scrape” CloudWatch and X-Ray to gather metrics/logs/traces into your provider of choice.
Pro: users save latency on their Lambda invocations
Con: build (potentially expensive) Rube Goldberg style machines to relay and scrape logs and traces from AWS’s products
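To see the per-invocation penalty the "Scrape" camp is avoiding, here is a sketch of the "Send" approach, with a hypothetical synchronous exporter whose network round-trip is simulated by a sleep (the function names and timings are illustrative, not any provider's real API):

```python
import time

def send_metrics(events):
    """Hypothetical synchronous exporter -- stands in for an HTTP POST
    to a third-party observability provider from inside the handler."""
    time.sleep(0.05)  # simulated ~50ms round-trip to the provider
    return len(events)

def handler(event, context=None):
    start = time.perf_counter()
    result = {"status": "ok"}  # the actual business logic
    # "Send" strategy: the invocation cannot return until the export
    # finishes, so every single call pays the provider's round-trip
    # as added (and billed) latency.
    send_metrics([{"name": "invocation", "value": 1}])
    result["extra_latency_s"] = time.perf_counter() - start
    return result

out = handler({})
print(out)
```

Every invocation eats that round-trip, which is exactly the cost that pushes teams toward scraping CloudWatch/X-Ray instead.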
…rather than forcing users to build elaborate systems to "scrape" logs or relay IPC messages from inside Lambda functions, AWS could provide some type of UDP listening agent on each Lambda host. These agents could perform a similar function to the existing X-Ray agent but, rather than sending events to AWS's X-Ray service, forward them to a customer-owned Kinesis Stream. Maybe even call them Lambda Event Streams
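The proposed agent can be sketched in a few lines: the function fires a UDP datagram at localhost and returns immediately, while a listener on the host collects events and forwards them. This is an illustration under the article's assumptions, not a real AWS feature; the list here stands in for the customer-owned Kinesis stream.

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0 -> OS picks a free port

stream = []  # stand-in for a customer-owned Kinesis stream

# The hypothetical per-host agent: listen on localhost UDP and
# "forward" each event (a real agent would batch to Kinesis).
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind((HOST, PORT))
port = server.getsockname()[1]

def agent():
    data, _ = server.recvfrom(65535)          # one datagram = one event
    stream.append(json.loads(data.decode()))  # forward to the stream

t = threading.Thread(target=agent)
t.start()

# Inside the Lambda function: fire-and-forget, no blocking round-trip,
# so the invocation pays essentially no latency for telemetry.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
event = {"trace_id": "abc123", "latency_ms": 42}
client.sendto(json.dumps(event).encode(), (HOST, port))

t.join(timeout=2)
client.close()
server.close()
print(stream)
```

The design choice is the same one statsd made: UDP keeps the emit path non-blocking and best-effort, trading delivery guarantees for near-zero invocation overhead.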