Just as an HTTP reverse proxy sits in front of a web application, and a sidecar is attached to a motorcycle, a sidecar proxy is attached to a main application to extend or add functionality. The Sidecar Proxy is an application design pattern that abstracts certain features, such as inter-service communication, monitoring and security, away from the main application to improve the maintainability, resilience and scalability of the application as a whole.
In this post I will show you how to use the Sidecar Pattern to address security challenges in Cloud Native Applications.
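As an illustrative sketch (not taken from the post itself), in Kubernetes a sidecar proxy is simply a second container in the same Pod, sharing the Pod's network namespace; the container names, image tags and ports below are assumptions:

```yaml
# Hypothetical Pod: the main application plus a proxy sidecar.
# Both containers share localhost, so the proxy can front the app
# and handle TLS, metrics or access control transparently.
apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-sidecar
spec:
  containers:
  - name: webapp                   # main application container
    image: my-webapp:1.0           # assumed image name
    ports:
    - containerPort: 8080
  - name: proxy-sidecar            # sidecar handling cross-cutting concerns
    image: envoyproxy/envoy:v1.30-latest
    ports:
    - containerPort: 9901
```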
In the previous post, Building an affordable remote DevOps desktop on AWS, I showed you how to build a cheap remote DevOps desktop on AWS. Now I'll explain how to do the same in approximately 3 minutes instead of 25, by using HashiCorp Packer to pre-bake an AWS AMI.
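To give a feel for the approach, a pre-baked AMI build can be sketched with a Packer HCL2 template like the one below (a minimal sketch, not the post's actual template; the region, instance type and provisioned packages are assumptions):

```hcl
# Minimal Packer sketch: bake the desktop tooling into an AMI once,
# so each new instance boots ready to use instead of reinstalling.
source "amazon-ebs" "devops-desktop" {
  region        = "us-east-1"        # assumed region
  instance_type = "t3.medium"
  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"
      virtualization-type = "hvm"
      root-device-type    = "ebs"
    }
    owners      = ["099720109477"]   # Canonical
    most_recent = true
  }
  ssh_username = "ubuntu"
  ami_name     = "devops-desktop-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.devops-desktop"]
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y xfce4 tigervnc-standalone-server",  # assumed desktop stack
    ]
  }
}
```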
If you’re going to work 100% remotely, or are just tired of carrying a heavy laptop while commuting, why not spin up a DevOps desktop PC in a public cloud and work from anywhere you want? If you like the idea, this post is for you. I’ll explain how to build your own remote DevOps desktop on AWS and how to configure your thinner local PC to connect to the remote one.
If you work as a DevOps engineer and want to automate the creation of your infrastructure on AWS from Windows 10, you should install and configure a minimalist toolset for Infrastructure as Code (IaC) tasks. Since I’m using an older Surface Pro 3 (Windows 10 with 4GB RAM and 64GB SSD), I’m going to focus on Terraform coding, leaving Docker, K8s, Jenkins, etc. for another article.
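Once the toolset is in place, a first Terraform file can be as small as the following sketch (the provider version, region and resource name are assumptions, not from the post):

```hcl
# main.tf — hypothetical minimal starting point for AWS IaC work
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"     # assumed provider version
    }
  }
}

provider "aws" {
  region = "us-east-1"       # assumed region
}

# A trivial first resource to validate the setup end to end
resource "aws_s3_bucket" "iac_demo" {
  bucket = "my-iac-demo-bucket"   # hypothetical bucket name
}
```

Running `terraform init` followed by `terraform plan` against a file like this is a quick way to confirm the whole toolchain works.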
I have a blog hosted on GitHub Pages, created with Jekyll from Linux. That works perfectly and I can publish posts frequently, but now I would like to do the same from a Windows 10 laptop (an older Surface Pro 3, 4GB RAM, 64GB SSD). The aim of this post is to explain how to prepare and configure Windows 10 to publish posts to a new or existing static site created with Jekyll.
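As a hedged sketch of what the Jekyll side looks like once Ruby is installed on Windows (the gem versions are assumptions):

```ruby
# Gemfile — minimal Jekyll setup for a GitHub Pages site
source "https://rubygems.org"

gem "jekyll", "~> 4.3"   # assumed version
gem "webrick"            # needed to serve locally on Ruby 3.x

group :jekyll_plugins do
  gem "jekyll-feed"
end
```

With this in place, `bundle install` and `bundle exec jekyll serve` preview the site locally before pushing to GitHub Pages.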
In “Minimum Viable Security for a Kubernetised Webapp: TLS everywhere - Part 1” I used the Affordable K8s Terraform scripts to create a K8s cluster with the Jetstack Cert-Manager and the NGINX Ingress Controller pre-installed. Now I want to improve the security of a webapp hosted in that cluster according to Minimum Viable Security (MVSec) and the Pareto Principle, or 80/20 rule.
If you have read the previous post about security along the container-based SDLC, you will have noted that DevOps and security practices should be applied and embedded throughout the SDLC. Before that, we had to understand the entire software production process and its sub-processes in order to apply these DevOps and security practices. In this post I’ll explain how to apply DevOps practices along the Machine Learning Software Development Life Cycle (ML-SDLC), and I’ll share a set of tools focused on implementing MLOps.
Minimum Viable Security (MVSec) is a concept borrowed from the Minimum Viable Product (MVP) product-development strategy and from the Pareto Principle, or 80/20 rule. Applied to IT security, it means investing only the minimum amount (20%) of effort needed to achieve the bulk (80%) of the goal: acceptable security.
The purpose of this post is to explain how to implement TLS everywhere to reach MVSec (roughly 80% of the security for 20% of the work) for a Kubernetised webapp hosted on AWS.
Nowadays, containers are becoming the standard deployment unit of software, which in the cloud-based application security world means two things:
- Software applications are distributed across containers.
- The minimum unit of deployment and shipment is the container.
In other words, with containers we add a new element to be considered along the Software Development Life Cycle (SDLC), an additional piece of software (the container), and from an architectural point of view those new pieces of software will be distributed.
That said, the purpose of this post is to explain how to embed security along the container-based SDLC (Secure-SDLC) and how DevOps practices will help its adoption.
In this blog post I’ll explain how to get an X.509 TLS certificate from Let’s Encrypt automatically at Terraform provision time, so that the services can additionally be invoked on port 443 (HTTPS/TLS).
During the Terraform execution, immediately after the Kubernetes cluster is created, the JetStack Cert-Manager is deployed in a Pod. It is the component that requests an X.509 TLS certificate from the Let’s Encrypt service; once that completes, Cert-Manager injects the certificate as a Kubernetes Secret into the NGINX Ingress Controller to enable TLS.
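Under the hood, Cert-Manager is driven by resources like the following ClusterIssuer (a sketch for illustration; the resource name, contact e-mail and Secret name are assumptions):

```yaml
# Hypothetical ClusterIssuer telling Cert-Manager to obtain
# certificates from Let's Encrypt via the HTTP-01 challenge,
# solved through the NGINX Ingress Controller.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com        # assumed contact e-mail
    privateKeySecretRef:
      name: letsencrypt-prod-key    # Secret holding the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx              # matches the NGINX Ingress Controller
```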
At this point you should already have a Kubernetes cluster with ExternalDNS and NGINX as Ingress Controller. If you don’t know how to achieve that, I recommend following these posts:
- Part 1 - Building your own affordable K8s to host a Service Mesh.
- Part 2 - Building your own affordable K8s - ExternalDNS and NGINX as Ingress.