Professional Email Marketing Setup: Subdomain Strategy for Maximum Deliverability
Category: DevOps
The terraform import command allows you to import into HashiCorp Terraform resources that already exist in the provider you are working with, in this case AWS. However, it only lets you import those resources one by one, with one run of terraform import at a time. Apart from being extremely tedious, in some situations this becomes impractical. That is the case for the records of a Route53 DNS zone: the task can become unmanageable if we have multiple DNS zones, each with tens or hundreds of records. In this article I offer you a bash script that will allow you to import into Terraform all the records of a Route53 DNS zone in a matter of seconds or a few minutes.
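Before getting to the full script, a minimal sketch of the idea (the zone ID, the resource naming scheme and the record-name handling are placeholder assumptions that must match the resources declared in your Terraform code):

```bash
#!/bin/bash
# Sketch: import every record of a Route53 zone into Terraform, one at a time.
# ZONE_ID and the aws_route53_record.* resource names are placeholders.
ZONE_ID="Z1234567890ABC"

aws route53 list-resource-record-sets --hosted-zone-id "$ZONE_ID" \
  --query 'ResourceRecordSets[].[Name,Type]' --output text |
while read -r name type; do
  # Terraform expects the import ID in the form ZONEID_RECORDNAME_TYPE
  resource_name=$(echo "${name%.}" | tr '.' '_')
  terraform import "aws_route53_record.${resource_name}_${type}" "${ZONE_ID}_${name%.}_${type}"
done
```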
Last December Amazon announced its new EBS gp3 volumes, which offer better performance and a cost saving of 20% compared to the gp2 volumes used until now. Well, after successfully testing these new volumes with multiple clients, I can only recommend their use: they are all advantages, and in the two and a half months that have passed since the announcement I have not noticed any problems or side effects.
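For reference, switching an existing volume from gp2 to gp3 can be done in place with the AWS CLI while the volume stays attached and in use (the volume ID below is a placeholder):

```bash
# Migrate an existing gp2 volume to gp3 in place.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3

# Check the progress of the modification; the volume remains usable meanwhile.
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```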
One of the biggest annoyances when working with AWS while your Internet connection has a dynamic IP is that, when it changes, you immediately lose access to all servers and services protected by an EC2 security group whose rules only allow traffic from certain specific IPs instead of allowing open connections from everyone (0.0.0.0/0).
Certainly the simplest thing to do is to always allow traffic on a given port from everyone, so that even if your Internet connection has a dynamic IP you will keep your access when it changes. But opening a port to everyone is not the right way to proceed from a security point of view, because then any attacker will be able to reach that port without restrictions, and that is not what you want.
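A minimal sketch of the usual workaround, i.e. refreshing the rule with your current public IP whenever it changes (the security group ID, the port and the previously authorized IP are placeholder assumptions):

```bash
#!/bin/bash
# Replace the old IP-restricted rule with one for the current public IP.
SG_ID="sg-0123456789abcdef0"
PORT=22
OLD_IP="203.0.113.10/32"                        # the IP currently authorized
NEW_IP="$(curl -s https://checkip.amazonaws.com)/32"

aws ec2 revoke-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port "$PORT" --cidr "$OLD_IP"
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --protocol tcp --port "$PORT" --cidr "$NEW_IP"
```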
Although there are different methods for backing up MySQL and MariaDB databases, the most common and effective one is to use the native tool that both MySQL and MariaDB provide for this purpose: the mysqldump command. As its name suggests, it is a command-line program that performs a complete export (dump) of all the contents of a database, or even of all the databases in a running MySQL or MariaDB instance. Of course it also allows partial backups, i.e. only some specific tables, or even only a subset of the records in a table.
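A few representative invocations (user, database and table names are placeholders):

```bash
# Full dump of every database on the instance.
mysqldump -u root -p --all-databases > all_databases.sql

# Dump a single database, or only some of its tables.
mysqldump -u root -p mydb > mydb.sql
mysqldump -u root -p mydb customers orders > mydb_partial.sql

# Dump only a subset of the rows of a table.
mysqldump -u root -p mydb orders --where="created_at >= '2021-01-01'" > orders_2021.sql
```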
Creating a MySQL or MariaDB user and granting it permissions to access a specific database and write data to it is a very common task that has to be performed every time you install a new application based on either of these database engines, such as web applications running on top of a LAMP stack. Whether it is a simple WordPress site or a more complex tailor-made application, one way or another you will have to complete these steps at some point before its deployment.
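As an illustration, assuming a WordPress-style setup with placeholder database, user and password names, the whole task boils down to a handful of statements run as root:

```bash
mysql -u root -p <<'SQL'
CREATE DATABASE IF NOT EXISTS wordpress;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'a-strong-password';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
SQL
```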
When we completely fill up an ext4 filesystem mounted on a partition hosted on an Amazon Web Services EBS volume, and we cannot free any space because we do not want to lose any of the stored data, the only solution is to grow the volume and extend the associated partition to 100% of the new capacity in order to obtain free space again.
We start in our example with a 50 GB volume that is 100% full. We want to extend it to double the size, 100 GB:
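One possible sequence of steps, assuming the volume is attached as /dev/xvdf with a single ext4 partition (the volume ID and device names are assumptions that depend on your setup):

```bash
# 1. Grow the EBS volume to 100 GB.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100

# 2. On the instance, extend the partition to use the new space.
sudo growpart /dev/xvdf 1

# 3. Grow the ext4 filesystem to fill the partition.
sudo resize2fs /dev/xvdf1

# 4. Verify the new size.
df -h /dev/xvdf1
```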
When we try to determine a computer's architecture and CPU-level performance using Linux commands like nproc or lscpu, we often find that we cannot properly interpret their results because we confuse terms such as physical CPU, logical CPU, virtual CPU, core, thread, socket, etc. If we add concepts like HyperThreading (not to be confused with multithreading), we end up in a situation where we cannot be sure how many cores our box has, and we don't understand why commands like htop indicate that we have 8 CPUs when we thought we had bought a single quad-core processor. In short, it's a mess.
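A quick way to start untangling the numbers is to look at the topology fields that lscpu already reports, keeping in mind that logical CPUs = sockets × cores per socket × threads per core:

```bash
# Summarize the CPU topology reported by the kernel.
lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core)'

# nproc only reports the number of logical CPUs available to the process.
nproc
```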