Posts

  • Simple Windows 10 Environment for DevOps Engineers

    If you are working as a DevOps Engineer and want to automate the creation of your infrastructure on AWS from Windows 10, then you should install and configure a minimalist toolset for Infrastructure as Code (IaC) tasks. Since I’m using an older Surface 3 Pro (Windows 10 with 4GB RAM and 64GB SSD), I’m going to focus on Terraform coding, leaving Docker, K8s, Jenkins, etc. for another article.
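
    As a sketch of such a minimalist toolset, assuming you use the Chocolatey package manager on Windows 10 (package names below are the standard Chocolatey ones; adjust to taste):

```shell
# Minimal IaC toolset for Windows 10, installed from an elevated
# PowerShell/cmd prompt via Chocolatey (https://chocolatey.org).
choco install -y git           # version control for your Terraform code
choco install -y terraform     # the IaC tool itself
choco install -y awscli        # AWS CLI to manage credentials and verify resources
choco install -y vscode        # lightweight editor with a Terraform extension available

# Verify the installation
terraform version
aws --version
```

    This keeps the footprint small enough for a 4GB/64GB machine while covering editing, versioning and provisioning.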

  • Github Pages and Jekyll on Windows 10

    I have a blog hosted on GitHub Pages, created with Jekyll from Linux. That works perfectly and I can publish posts frequently, but now I would like to do the same from a Windows 10 laptop (an older Surface 3 Pro, 4GB RAM, 64GB SSD). The aim of this post is to explain how to prepare and configure Windows 10 to publish posts to a new or existing static site created with Jekyll.

    To do that I’m going to follow the Jekyll on Windows guide; basically I’ll download, install and configure a Ruby+Devkit version on Windows 10.
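
    Once Ruby+Devkit is installed from rubyinstaller.org (as the Jekyll on Windows guide describes), the remaining steps look roughly like this (`myblog` is a placeholder site name):

```shell
# Install Jekyll and Bundler into the fresh Ruby+Devkit installation
gem install jekyll bundler

# Create a new site (or cd into an existing one) and serve it locally
jekyll new myblog
cd myblog
bundle exec jekyll serve   # site available at http://127.0.0.1:4000
```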

  • Minimum Viable Security for a Kubernetised Webapp: HTTP Basic Auth on TLS - Part2

    In “Minimum Viable Security for a Kubernetised Webapp: TLS everywhere - Part1” I used the Affordable K8s’ Terraform scripts to create a K8s Cluster with the Jetstack Cert-Manager and the NGINX Ingress Controller pre-installed. Now I want to improve the security of a webapp hosted in that cluster according to Minimum Viable Security (MVSec) and the Pareto Principle, or 80/20 rule.

    In this post I’ll explain how to enable and configure HTTP Basic Authentication over TLS in the Weave Scope webapp running in the recently created K8s Cluster.
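
    A minimal sketch of the approach, using the NGINX Ingress Controller's standard auth annotations (the secret name `scope-basic-auth` and ingress name `weave-scope` are placeholders):

```shell
# Create an htpasswd file with user 'admin' and store it as a Kubernetes Secret.
# The NGINX Ingress Controller expects the secret key to be named 'auth'.
htpasswd -c auth admin                 # prompts for the password
kubectl create secret generic scope-basic-auth --from-file=auth

# Point the Ingress at the secret via the NGINX basic-auth annotations
kubectl annotate ingress weave-scope \
  nginx.ingress.kubernetes.io/auth-type=basic \
  nginx.ingress.kubernetes.io/auth-secret=scope-basic-auth \
  nginx.ingress.kubernetes.io/auth-realm="Authentication Required"
```

    Since the Ingress already terminates TLS, the credentials are never sent in clear text.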

  • DevOps is to SDLC as MLOps is to Machine Learning Applications

    If you have read the previous post about Security along the Container-based SDLC, then you will have noticed that DevOps and Security practices should be applied and embedded along the SDLC. Back then we had to understand the entire software production process and its sub-processes in order to apply those DevOps and Security practices. Well, in this post I’ll explain how to apply DevOps practices along the Machine Learning Software Applications Development Life Cycle (ML-SDLC), and I’ll share a set of tools focused on implementing MLOps.


    Data Science (& ML) Life Cycle

  • Minimum Viable Security for a Kubernetised Webapp: TLS everywhere - Part1

    Minimum Viable Security (MVSec) is a concept borrowed from the Minimum Viable Product (MVP) idea in product development strategy and from the Pareto Principle, or 80/20 rule. The MVP concept applied to IT security means investing only the minimum amount (20%) of effort in the product (application) in order to prove the viability (80%) of an idea (acceptable security).

    The purpose of this post is to explain how to implement TLS everywhere to reach MVSec (roughly 80% of the security with 20% of the work) for a Kubernetised webapp hosted on AWS.
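
    As an illustration of the cert-manager side of "TLS everywhere", a Let's Encrypt `ClusterIssuer` typically looks like the manifest below (email and names are placeholders, and the exact `apiVersion` depends on your cert-manager release):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key      # Secret storing the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx                # challenges solved via the NGINX Ingress Controller
```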


    Minimum Viable Security for a Kubernetised Webapp: TLS everywhere with NGINX Ingress Controller, Cert-Manager and Let's Encrypt

  • Security along the Container-based SDLC

    Nowadays, containers are becoming the standard deployment unit of software, and in the Cloud-based Application Security world that means two things:

    • The Software Applications are distributed into containers.
    • The minimum unit of deployment and shipment is the container.

    In other words, by using containers we are adding a new element to be considered along the Software Development Life Cycle (SDLC): an additional piece of software, the container. And from an architectural point of view, those new pieces of software will be distributed.

    That said, the purpose of this post is to explain how to embed security along the Container-based SDLC (Secure-SDLC) and how DevOps practices will help its adoption.


    Security along the Container-based SDLC - Overview

  • Building your own affordable K8s to host a Service Mesh - Part 3: Certificate Manager

    In this blog post I’ll explain how to obtain an X.509 TLS certificate from Let’s Encrypt automatically at Terraform provision time, so that the services can also be invoked on port 443 (HTTPS/TLS).
    During the Terraform execution, immediately after the Kubernetes cluster is created, the JetStack Cert-Manager is deployed in a Pod. It requests an X.509 TLS certificate from the Let’s Encrypt service and, once issued, injects the certificate as a Kubernetes Secret into the NGINX Ingress Controller to enable TLS.

    At this point you should already have a Kubernetes cluster with ExternalDNS and NGINX as Ingress Controller. If you don’t know how to achieve that, I recommend following these posts:
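
    For reference, once the Cert-Manager is running, an Ingress only needs a `tls` section and an issuer annotation for the certificate to be requested and injected automatically (host, issuer and service names below are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod   # name of your ClusterIssuer
spec:
  tls:
  - hosts:
    - demo.example.com                # placeholder DNS name
    secretName: demo-tls              # Cert-Manager stores the certificate here
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80
```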

    K8s Cluster created using AWS Spot Instances - Cert-Manager and Let's Encrypt

  • Building your own affordable K8s to host a Service Mesh - Part 2: External DNS and Ingress

    In order to get an affordable Kubernetes, every part we use should be affordable too, and among the expensive and tricky parts are AWS Elastic Load Balancing (ELB) and AWS Route 53 (DNS). Fortunately, the Kubernetes SIGs are working to address this gap with Kubernetes ExternalDNS.

    But what is the problem?

    Apart from being expensive, the problem is that every time I deploy a Service in Kubernetes I have to add a new DNS entry in the Cloud Provider’s DNS manually. Yes, of course, the process can be automated, but the idea is to do it at provisioning time. In other words, every developer should be able to publish their services by adding the DNS name as an annotation, so that those services can be called over the Internet. Kubernetes does bring a DNS by default, but it is an internal one that only resolves names within the Kubernetes network, not for Internet-facing services.

    The Solution

    Kubernetes ExternalDNS runs a controller in our affordable K8s cluster that synchronizes exposed Kubernetes Services and Ingresses with the Cloud Provider’s DNS service, in this case AWS Route 53. Below you can see a high-level diagram and the current status of my Affordable Kubernetes Data Plane; I recommend reading the first post about it.
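
    For example, with ExternalDNS watching the cluster, publishing a Service under a public name comes down to a single annotation (the hostname and names below are placeholders; the annotation key is ExternalDNS's standard one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    # ExternalDNS picks this up and creates the Route 53 record automatically
    external-dns.alpha.kubernetes.io/hostname: demo.example.com
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```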

    Service Mesh hosted using AWS Spot Instances

  • Building your own affordable K8s to host a Service Mesh - Part 1

    I want to build a Container-based Cloud to deploy any kind of workload (RESTful API, Microservices, Event-Driven, Functions, etc.) but it should be affordable, ready to use, reliable, secure and productionable. This means:

    • Productionable: should be fully functional and ready to be used as a production environment.
    • Reliable and Secure: able to improve the security level by implementing more security controls, at least fully isolated secure private networking.
    • Affordable: as cheap as possible.
    • Ready to use: able to be automated (DevOps and IaC) with a mature management API.

    Below is a high-level architecture of the Container-based Cloud I want to build. I will focus on the Service Mesh Data Plane.

    These requirements restrict the options, all of which involve a public cloud provider, in particular AWS Spot Instances and Google Cloud Preemptible VM Instances. Unfortunately, Microsoft Azure only provides Low-Priority VMs through the Azure Batch Service. But if you are a new user, you can apply for the Free Tier in all three cloud providers.

  • Migrating WordPress.com's blog to GitHub Pages with Jekyll - Part 2

    In the first blog post I explained how to export your WordPress.com blog and use it to generate a static blog site hosted on GitHub Pages. Now, in this blog post (Part 2), I will explain how to manage the look & feel, theme, layouts and pagination of a previously migrated WordPress.com blog on GitHub Pages. I’ll also explain how to convert all the HTML post files, obtained with JekyllImport::Importers::WordpressDotCom, to Markdown format using a simple Python script.

    Migrating WordPress.com's blog to GitHub Pages by using Jekyll Part2