Serverless in 2019: From ‘Hello World’ to ‘Hello Production’

Stacks on Stacks

and patching, and scaling, and load-balancing, and orchestrating, and deploying, and… the list goes on! In fact, because serverless teams obtain significant velocity from relying largely on standard infrastructure services, many will experience a cultural reset around what it means to refactor a monolith.

AWS Cost Optimization: 5 Best Practices for Your Business

Modus Create

Use the Trusted Advisor Idle Load Balancers check to get a report of load balancers that have had a request count of less than 100 over the past seven days. Then, you can delete these load balancers to reduce costs. You can also review data transfer costs using Cost Explorer.
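
If a scripted version of that check is useful, the sketch below pulls the same Trusted Advisor result via the AWS Support API with boto3. It assumes a support plan that exposes the Trusted Advisor API (Business or Enterprise) and looks the check up by its "Idle Load Balancers" name rather than a hard-coded ID; the printed fields are simply whatever metadata the check reports.

```python
# Hedged sketch: list load balancers flagged by the Trusted Advisor
# "Idle Load Balancers" check. Assumes a support plan that exposes the
# Trusted Advisor API; the check-name lookup is an assumption, not a fixed ID.
import boto3

support = boto3.client("support", region_name="us-east-1")  # Support API lives in us-east-1

# Find the check ID by name instead of hard-coding it.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
idle_lb_check = next(c for c in checks if c["name"] == "Idle Load Balancers")

result = support.describe_trusted_advisor_check_result(checkId=idle_lb_check["id"])
for resource in result["result"].get("flaggedResources", []):
    # Each flagged resource's metadata follows the check's "metadata" column order.
    print(dict(zip(idle_lb_check["metadata"], resource.get("metadata", []))))
```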

Trending Sources

Logs and Traces: Two Houses Unalike in Dignity

Honeycomb

The primary hosting pattern to migrate was a .NET application running on a Windows instance behind a load balancer. I had used the Honeycomb agentless CloudWatch integration to ingest structured logs from Lambda functions. I was optimistic I could adopt a similar pattern to provide visibility into our provisioning process.
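
For the Lambda side of that pattern, a minimal sketch of such a structured log is below: one JSON line per invocation, printed to CloudWatch Logs so an agentless, CloudWatch-based integration can ingest it as an event. The field names and the "provisioner" service name are illustrative assumptions, not Honeycomb's required schema.

```python
# Minimal sketch: emit one structured JSON log line per Lambda invocation.
# A CloudWatch Logs-based integration (such as Honeycomb's agentless ingest)
# can then parse each line as an event. Field names here are illustrative.
import json
import time

def handler(event, context):
    start = time.time()
    status = "ok"
    try:
        # ... provisioning work would go here ...
        pass
    except Exception:
        status = "error"
        raise
    finally:
        print(json.dumps({
            "service": "provisioner",               # assumed service name
            "request_id": context.aws_request_id,
            "duration_ms": (time.time() - start) * 1000,
            "status": status,
        }))
    return {"status": status}
```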

Chaos Engineering at Datadog

LaunchDarkly

You can go blow up stateless applications all day long and you can just load balance across new resources all the time. I think it’s important when you’re talking about chaos engineering to also talk about culture. Our Chaos Monkey was like a Python script in AWS Lambda. It makes this really easy to do.
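
As a rough illustration of that kind of script (not Datadog's actual tooling), the sketch below is a Lambda handler that terminates one random EC2 instance that has explicitly opted in to chaos via a tag; the tag key and opt-in convention are assumptions for the example.

```python
# Illustrative "Chaos Monkey"-style Lambda sketch, not Datadog's actual script.
# It terminates one random running EC2 instance that is explicitly opted in
# via a tag (the tag key "chaos-opt-in" is an assumption for this example).
import random
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:chaos-opt-in", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instances:
        return {"terminated": None}
    victim = random.choice(instances)
    ec2.terminate_instances(InstanceIds=[victim])
    return {"terminated": victim}
```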

Automate releases from your pipelines using Infrastructure as Code

CircleCI

This is all possible due to recent culture shifts within teams and organizations as they begin to embrace CI/CD and DevOps practices. apply(lambda args: generate_k8_config(*args)). This code also creates a Load Balancer resource that routes traffic evenly to the active Docker containers on the various compute nodes.
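
The apply(...) fragment in the excerpt is Pulumi-style; a rough, self-contained sketch of both ideas (combining outputs into a kubeconfig, and fronting the containers with a LoadBalancer Service) might look like the following in Pulumi's Python SDK. generate_k8_config, the stand-in cluster outputs, and the resource names are assumptions, not the article's exact code.

```python
# Rough sketch of the two ideas in the excerpt, using Pulumi's Python SDK.
# generate_k8_config and the resource names are assumptions, not the article's code.
import pulumi
from pulumi_kubernetes.core.v1 import Service

def generate_k8_config(name, endpoint, ca_cert):
    # Stub helper: build a kubeconfig string from resolved cluster outputs.
    return (
        f"apiVersion: v1\nclusters:\n- name: {name}\n  cluster:\n"
        f"    server: {endpoint}\n    certificate-authority-data: {ca_cert}\n"
    )

# In a real program these would come from a cluster resource; plain Outputs stand in here.
cluster_name = pulumi.Output.from_input("demo-cluster")
cluster_endpoint = pulumi.Output.from_input("https://example.invalid")
cluster_ca = pulumi.Output.from_input("BASE64-CA-DATA")

# Combine the async outputs and build the kubeconfig once they all resolve.
k8s_config = pulumi.Output.all(cluster_name, cluster_endpoint, cluster_ca).apply(
    lambda args: generate_k8_config(*args)
)

# A Service of type LoadBalancer routes traffic evenly across the matching pods.
app_service = Service(
    "app-service",
    spec={
        "type": "LoadBalancer",
        "selector": {"app": "my-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
)
```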