Posts

  • Security along the Container-based SDLC

    Nowadays, containers are becoming the standard deployment unit of software, and in the Cloud-based Application Security world that means two things:

    • Software Applications are packaged and distributed as containers.
    • The minimum unit of deployment and shipment is the container.

    In other words, with containers we are adding a new element to be considered along the Software Development Life Cycle (SDLC), an additional piece of software (the container), and from an architectural point of view those new pieces of software will be distributed.

    That said, the purpose of this post is to explain how to embed Security along the Container-based SDLC (Secure-SDLC) and how DevOps practices will help its adoption.


    Security along the Container-based SDLC - Overview

  • Building your own affordable K8s to host a Service Mesh - Part 3: Certificate Manager

    In this blog post I’ll explain how to get an X.509 TLS Certificate from Let’s Encrypt automatically at Terraform provisioning time, so that the services can also be invoked on port 443 (HTTPS/TLS).
    During the Terraform execution, immediately after the Kubernetes Cluster is created, the JetStack Cert-Manager is deployed in a Pod; it is the component that requests an X.509 TLS Certificate from the Let’s Encrypt service. Once the request completes, Cert-Manager stores the X.509 Certificate as a Kubernetes Secret that the NGINX Ingress Controller uses to enable TLS.
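
    To make that flow more concrete, below is a minimal sketch assuming a standard cert-manager HTTP-01 setup; the issuer name, e-mail address, hostnames and backend Service are placeholders, and API versions may differ between cert-manager releases, so treat it as an illustration rather than the exact manifests used in this series.

      # Hypothetical ClusterIssuer asking Let’s Encrypt for certificates
      # through the HTTP-01 challenge solved by the NGINX Ingress Controller.
      apiVersion: cert-manager.io/v1
      kind: ClusterIssuer
      metadata:
        name: letsencrypt-prod
      spec:
        acme:
          server: https://acme-v02.api.letsencrypt.org/directory
          email: admin@example.com              # placeholder contact address
          privateKeySecretRef:
            name: letsencrypt-prod-account-key
          solvers:
            - http01:
                ingress:
                  class: nginx
      ---
      # Ingress annotated so cert-manager issues the certificate and stores it
      # in the 'example-tls' Secret, which NGINX uses to terminate TLS on 443.
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: example
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt-prod
      spec:
        ingressClassName: nginx
        tls:
          - hosts:
              - app.example.com                 # placeholder DNS name
            secretName: example-tls
        rules:
          - host: app.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: example-service     # placeholder backend Service
                      port:
                        number: 80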

    At this point you must have created a Kubernetes Cluster with ExternalDNS and NGINX as the Ingress Controller. If you don’t know how to achieve that, I recommend following these posts:

    K8s Cluster created using AWS Spot Instances - Cert-Manager and Let's Encrypt

  • Building your own affordable K8s to host a Service Mesh - Part 2: External DNS and Ingress

    In order to get an affordable Kubernetes, every part we’re going to use should be affordable too, and two of the expensive and tricky pieces are AWS Elastic Load Balancing (ELB) and AWS Route 53 (DNS). Fortunately, the Kubernetes SIGs are working to address this gap with Kubernetes ExternalDNS.

    But what is the problem?

    Apart from being expensive, the problem is that every time I deploy a Service in Kubernetes I have to add a new DNS entry in the Cloud Provider’s DNS manually. Yes, of course, the process can be automated, but the idea is to do it at provisioning time. In other words, every developer should be able to publish their services by adding the DNS name as an annotation, so that those services can be called over the Internet. Yes, Kubernetes brings a DNS by default, but it is an internal one that only resolves names over the Kubernetes network, not for Internet-facing services.

    The Solution

    Kubernetes ExternalDNS runs a program in our affordable K8s that synchronizes exposed Kubernetes Services and Ingresses with the Cloud Provider’s DNS service, in this case AWS Route 53. Below you can view a high-level diagram and the current status of my affordable Kubernetes Data Plane; I recommend taking a look at the first post about it.

    Service Mesh hosted using AWS Spot Instances
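
    To give an idea of how a developer would publish a service this way, here is a minimal sketch assuming ExternalDNS is already running against Route 53; the hostname, selector and ports are placeholders rather than values from this series.

      # Hypothetical Service exposing an app; ExternalDNS watches the annotation
      # below and creates the matching DNS record in AWS Route 53.
      apiVersion: v1
      kind: Service
      metadata:
        name: echo
        annotations:
          external-dns.alpha.kubernetes.io/hostname: echo.example.org   # placeholder DNS name
      spec:
        type: LoadBalancer
        selector:
          app: echo
        ports:
          - port: 80
            targetPort: 8080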

  • Building your own affordable K8s to host a Service Mesh - Part 1

    I want to build a Container-based Cloud to deploy any kind of workload (RESTful API, Microservices, Event-Driven, Functions, etc.) but it should be affordable, ready to use, reliable, secure and productionable. This means:

    • Productionable: should be fully functional and ready to be used as a production environment.
    • Reliable and Secure: able to improve the security level by implementing more security controls, at least fully isolated secure private networking.
    • Affordable: cheaper.
    • Ready to use: able to be automated (DevOps and IaC) with a mature management API.

    Below is a high-level architecture of the Container-based Cloud I want to get. I will focus on the Service Mesh Data Plane.

    These requirements restrict the options; all of them rely on a Public Cloud Provider, in particular AWS Spot Instances and Google Cloud Preemptible VM Instances. Unfortunately, Microsoft Azure only provides Low-Priority VMs to be used from the Azure Batch Service. But if you are a new user, you can apply for the Free Tier in all three Cloud Providers.

  • Migrating WordPress.com's blog to GitHub Pages with Jekyll - Part 2

    In the first blog post I explained how to export your WordPress.com blog and use it to generate a static blog site hosted on GitHub Pages. Now, in this blog post (Part 2), I will explain how to manage the look & feel, theme, layouts and pagination of a WordPress.com blog previously migrated to GitHub Pages. I’ll also explain how to convert all the HTML post files, obtained by using JekyllImport::Importers::WordpressDotCom, to Markdown format by using a simple Python script.

    Migrating WordPress.com's blog to GitHub Pages by using Jekyll Part2
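
    To hint at the kind of settings involved, here is a minimal sketch of a Jekyll _config.yml handling the theme, default layout and pagination; the values are placeholders, not the actual configuration of this blog.

      # Hypothetical excerpt of Jekyll’s _config.yml
      theme: minima                   # placeholder theme
      plugins:
        - jekyll-paginate
      paginate: 5                     # posts per index page
      paginate_path: "/page:num/"     # URL pattern for paginated index pages
      defaults:
        - scope:
            path: ""
            type: posts
          values:
            layout: post              # default layout applied to all posts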

  • Setting a Python 3 local programming environment

    This post will guide you through installing Python 3 (alongside Python 2.x) on your local Linux machine and setting up a programming environment via the command line. It explicitly covers the installation procedure for Ubuntu 18.04 or above, but the general principles apply to any other Debian-based Linux distribution.

    Python 3

  • Android Tablet or iPad as extra monitor for Linux

    There are several applications out there that help us use a spare Android Tablet or iPad as an external monitor for a Windows PC or Mac OS X laptop over Wi-Fi or using a USB cable. For example:

  • Tips & Tricks to use the Apple Magic Mouse in your Linux Ubuntu

    This is a set of resources (reported issues, blogs, drivers, tips and tricks) for anyone eager to use an Apple Magic Mouse in Linux Ubuntu. These resources are recommendations from many people who tried and successfully got the Apple Magic Mouse working in Linux. Next you can review documentation about how to configure it on your Linux PC:

  • Migrating WordPress.com's blog to GitHub Pages with Jekyll - Part 1

    I would like to share my experience migrating my blog hosted on WordPress.com to GitHub Pages in 2 parts. In this blog post (Part 1) I will explain how to use Jekyll to export/import the content and how to configure a GitHub Pages site to host a full blog as a headless Content Management System (CMS) based on Ruby. In the next blog post (Part 2) I will explain how to manage the look & feel, layouts, etc.

    Migrating WordPress.com's blog to GitHub Pages by using Jekyll

  • Using Weave Scope to explore Microservices Communication and Service Mesh (OpenShift and Istio)

    If you are working with ESBs, Message Brokers, BPMS, SOA or Microservices, you will notice that you are solving the same problems as with Standalone Applications, but in a different way, because all of them are different kinds of Distributed Application. Those problems are:

    • Users Management, Authentication and Authorisation
    • Logging, Debugging, Monitoring and Alerting
    • Clustering, High Availability, Load Balancing, etc…