#WDILTW – RTFM, then RTFM again, then improve it

This week I learned two valuable aspects of Terraform that I had not encountered before.

The first is Terraform state import. While I use terraform state to list, show, and even remove state, I was unaware you could import an already-created AWS resource. It is not actually a subcommand of “terraform state”; instead it is “terraform import”, which is likely why I never saw it when looking at the terraform state syntax.

% terraform state
Usage: terraform [global options] state <subcommand> [options] [args]

  This command has subcommands for advanced state management.

  These subcommands can be used to slice and dice the Terraform state.
  This is sometimes necessary in advanced cases. For your safety, all
  state management commands that modify the state create a timestamped
  backup of the state prior to making modifications.

  The structure and output of the commands is specifically tailored to work
  well with the common Unix utilities such as grep, awk, etc. We recommend
  using those tools to perform more advanced state tasks.

Subcommands:
    list                List resources in the state
    mv                  Move an item in the state
    pull                Pull current state and output to stdout
    push                Update remote state from a local state file
    replace-provider    Replace provider in the state
    rm                  Remove instances from the state

I am not an expert in Terraform, and the command help output shown above gave me no pointer to look elsewhere, but simply reading the manual can help you learn a new feature. If you do not know a product, reading the documentation and examples is an ideal, self-paced way to get started.
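
As a minimal sketch of the syntax (the resource address and bucket name here are hypothetical), importing an existing AWS resource into state looks like this:

% terraform import aws_s3_bucket.demo my-existing-bucket

The first argument is the resource address as it appears in your configuration; the second is the provider-specific ID of the existing resource.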

The second is Meta-Arguments. I use lifecycle, and to be honest I had learned and then forgotten about count. Count was something I was able to use to solve a very nasty cross-region Kinesis stream issue, reminding me of syntax I had since forgotten. Using coalesce and conditional expressions (aka the ternary operator) can help in modules, for example.

resource "aws_rds_cluster" "demo" {
  ...
  global_cluster_identifier       = var.has_global_cluster ? local.global_cluster_identifier : ""
  master_username                 = var.has_global_cluster ? "" : var.master_username
  db_cluster_parameter_group_name = coalesce(var.db_cluster_parameter_group_name , local.db_cluster_parameter_group_name)
  ...      

However, to prevent creation of the resource entirely, use count.

resource "aws_???" "demo_???" {
  count = var.filter_condition ? 1 : 0
  ...
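
Note that once count is set, references to the resource become a list, so you need to guard against the zero-count case. A minimal sketch, keeping the placeholders above:

output "demo_id" {
  # empty string when the resource was not created
  value = join("", aws_???.demo_???[*].id)
}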

And just when I thought I had read enough about Meta-Arguments, I hit a new, never-before-seen problem. Had I read the summary resources page about Meta-Arguments and looked at the very next section, I would likely have been able to solve this new error without having to RTFM a second time.

module.?.?.aws_rds_cluster.default: Still creating... [1h59m53s elapsed]

Error: Error waiting for RDS Cluster state to be "available": timeout while waiting for state to become 'available' (last state: 'creating', timeout: 2h0m0s)

on .terraform/modules/?/main.tf line 306, in resource "aws_rds_cluster" "default":

306: resource "aws_rds_cluster" "default" {

I did not know there was a 2-hour timeout, and I did not know you can change that with:

resource "aws_rds_cluster" "default" {
  ...
  timeouts {
    create = "4h"
    delete = "4h"
  }
}

On a number of occasions I have found online documentation to be incomplete or inaccurate. If you find this, submit a request to get it fixed; most sources include a link at the bottom to recommend improvements. I have had good success submitting improvements to the AWS documentation.

#WDILTW – What can I run from my AWS Aurora database

When you work with AWS Aurora you have limited admin privileges. There are some Aurora-specific grants for MySQL, including SELECT INTO S3 and LOAD FROM S3, that replace the lost functionality of SELECT INTO OUTFILE and mysqldump/mysqlimport using a delimited format. While I know and use Lambda capabilities, I have never executed anything with INVOKE LAMBDA directly from the database.
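
As a minimal sketch of this S3 integration syntax (the bucket and table names are hypothetical, and the cluster must already have the appropriate IAM roles attached):

SELECT * FROM orders
INTO OUTFILE S3 's3://my-bucket/exports/orders'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

LOAD DATA FROM S3 's3://my-bucket/exports/orders.part_00000'
INTO TABLE orders_copy
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';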

This week I found out about INVOKE COMPREHEND (I had to look that product up) and INVOKE SAGEMAKER (which I have used independently). These are machine learning capabilities that enable you to build custom integrations using Comprehend and SageMaker. I did not have any chance to evaluate these capabilities, so I am unable to share any use cases or experiences. There are two built-in Comprehend functions, aws_comprehend_detect_sentiment() and aws_comprehend_detect_sentiment_confidence(), a likely future starting place. SageMaker is invoked as an extension of a CREATE FUNCTION that provides the ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT syntax.
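
As a hedged sketch of both (the table, column, and endpoint names here are hypothetical):

-- Built-in Comprehend sentiment functions
SELECT comment,
       aws_comprehend_detect_sentiment(comment, 'en') AS sentiment,
       aws_comprehend_detect_sentiment_confidence(comment, 'en') AS confidence
FROM feedback;

-- A SageMaker endpoint exposed as a SQL function
CREATE FUNCTION predict_churn (features VARCHAR(1000))
RETURNS FLOAT
ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT
ENDPOINT NAME 'churn-endpoint';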

Also available are some MySQL status variables, including Aurora_ml_logical_response_cnt, Aurora_ml_actual_request_cnt, Aurora_ml_actual_response_cnt, Aurora_ml_cache_hit_cnt, and Aurora_ml_single_request_cnt.
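
You can inspect these with, for example:

mysql> SHOW GLOBAL STATUS LIKE 'Aurora_ml%';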

Some googling found an interesting simple example: calculating the positive/negative sentiment and confidence of sentences of text. I could see this as useful for analyzing comments. I have included the example from this site here to encourage my readers to take a look, as I plan to do. After the IAM configuration I will be really curious to evaluate the responsiveness of this example. Is this truly a batch-only operation, or could it return a meaningful response in a timely fashion?

This also led to bookmarking for reading https://awsauroralabsmy.com/, https://github.com/aws-samples/amazon-aurora-labs-for-mysql/ and https://squidfunk.github.io/mkdocs-material/ all from this one page.

#WDILTW – AWS RDS Proxy

This week I was evaluating AWS RDS Proxy. If you are familiar with the Relational Database Service (RDS) and use MySQL or Postgres, this is an additional option to consider.

Proxies, as the name suggests, accept incoming requests and perform some management before forwarding those requests to the ultimate target.

RDS Proxy takes incoming database connections and provides several capabilities, including connection pooling and capping the total database connections, with each configured proxy holding a percentage of the total connections for the target cluster. The proxy can handle routing only for writer instances (at this time) to minimize the impact of a planned or unplanned failover. The RDS Proxy, however, does not address the underlying problem of too many connections to the database; it just adds another layer, one that is, or may be, more configurable or tunable than an application requesting connections.

The RDS Proxy is automatically Highly Available (HA). You can determine this by looking at the host IPs in the MySQL processlist. I have yet to identify any other means of seeing whether a connection is a proxy connection at the database level if you are using the same credentials. RDS Proxy does give you the ability, via Secrets Manager, to connect as a different user. You can also specify a connection initialization query. I used a SET variable so that the application could determine whether it was using a proxy, although that is of little benefit in server connection management.
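
A sketch of both observations (the variable name is hypothetical; the initialization query is configured on the proxy itself, not in the application):

-- Connection initialization query configured on the proxy
SET @via_proxy = 1;

-- From the database side, proxy connections appear with the proxy's host IPs
mysql> SELECT DISTINCT host FROM information_schema.processlist;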

The RDS Proxy can enforce TLS, something which in my opinion should always be used for application-to-datastore communications, but which historically has been overlooked at practically every company I have worked for or consulted to. Just because you are communicating within a VPC does not protect your communications from actors within your VPC. I can remember, at a prior employer, the disappointment when encrypted cross-region replication was dropped because it was too hard to migrate or manage. That shows an all-too-common problem of laziness winning over security.

If you are new to a particular technology, the age of the Internet gives you search capabilities to find numerous articles. If you search for anything AWS, you will generally get the official pages as the top results; it takes some digging to find other articles. Prior to this lesson I had only read about RDS Proxy; I had never actually set one up.

When anybody is learning something new, I like to say your value-add is not just to read an article, but to reproduce it and then adapt or enhance it. This Amazon example is no different. Repeating each step surfaced multiple errors in syntax, which I can contribute back as comments. If this were open source code, you could contribute a pull request (PR). The good news is the first example of configuring a proxy includes both GUI and CLI commands. I always like to do my work on the command line, even for the first iteration; you cannot scale a human moving a mouse around and clicking. What I found, however, was that the official AWS CLI lacked a key component of the proxy setup around target groups. The UI provides a capability that the CLI did not. Another discrepancy: when I was making modifications to the proxy in the GUI I would get an error, but I could make the same change via the CLI. These discrepancies are an annoyance for consistency and a first evaluation.
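
For reference, a minimal CLI sketch of the two core steps (all names and ARNs here are hypothetical):

% aws rds create-db-proxy \
    --db-proxy-name demo-proxy \
    --engine-family MYSQL \
    --auth AuthScheme=SECRETS,SecretArn=arn:aws:secretsmanager:us-east-1:123456789012:secret:demo,IAMAuth=DISABLED \
    --role-arn arn:aws:iam::123456789012:role/demo-proxy-role \
    --vpc-subnet-ids subnet-aaaa subnet-bbbb

% aws rds register-db-proxy-targets \
    --db-proxy-name demo-proxy \
    --db-cluster-identifiers demo-cluster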

So what was the outcome of my evaluation? First, I was able to demonstrate I could add a proxy to an existing cluster in one of our test environments and direct traffic from a mysql client through the proxy to the target database. I was able to use Secrets Manager to enforce credentials for authorization. I did not look into Identity and Access Management (IAM) role support. I was able to benchmark simulated load with sysbench to compare the latency of proxy traffic versus direct traffic. I have simplified my examples so that anybody can run these tests themselves for simple validation.
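
A hedged sketch of that comparison (the endpoint and credentials are hypothetical; run the same command once against the proxy endpoint and once against the cluster endpoint):

% sysbench oltp_read_only \
    --mysql-host=demo-proxy.proxy-xxxxxxxx.us-east-1.rds.amazonaws.com \
    --mysql-user=app --mysql-password=... \
    --tables=4 --table-size=100000 --time=300 run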

I could enforce TLS communications for the mysql client testing; however, our company's internal HTTP proxy caused the usual self-signed certificate issues with sysbench, something I really need to master. Surprisingly, when I looked at what options sysbench gave me for SSL (side bar: we should always refer to this as TLS instead of SSL), the defined options in the installed recent version still use the ssl name. The scope of options also differed from the source code online, which raises the question of why. That is the great thing about open source: you can read the code. You may have even met the author at a conference presentation.

Where the evaluation hit a business impact was in comparative performance. I am still awaiting an AWS support response to my evaluation.

What’s next is to get an application team to evaluate end-to-end database operations, easily done as Route 53 DNS is used for endpoint communications.
Where I got stuck was incorporating the setup of RDS Proxy within Terraform. We currently use version 12. While there is an aws_db_proxy resource, I needed a newer version of the aws provider than our environment was using. The official Hashicorp documentation of the resource really does not highlight the complexity necessary to create a proxy. While you will already have configured a VPC, subnets, even ingress security groups and secrets, which are all parts necessary for an RDS cluster, you still need a number of integrated pieces.

You will need an IAM role for your proxy, but that role requires a policy to use KMS to get the secrets you wish to use for authorization. This interdependency of KMS and secret ARNs makes it difficult to launch an RDS Proxy as easily as you would an RDS Aurora cluster. Still, it is just another challenge to work through. The added complexity is that the RDS Proxy also needs an authorization argument, for example the --auth argument in the AWS CLI. I see this as a complexity for the management of RDS users that you wish to also be configured for use in the proxy.
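
A minimal Terraform sketch of the integrated pieces, assuming the IAM role, secret, security group, and cluster already exist elsewhere in your configuration (all names are hypothetical):

resource "aws_db_proxy" "demo" {
  name                   = "demo-proxy"
  engine_family          = "MYSQL"
  require_tls            = true
  role_arn               = aws_iam_role.demo_proxy.arn
  vpc_subnet_ids         = var.subnet_ids
  vpc_security_group_ids = [aws_security_group.demo_proxy.id]

  auth {
    auth_scheme = "SECRETS"
    iam_auth    = "DISABLED"
    secret_arn  = aws_secretsmanager_secret.demo.arn
  }
}

resource "aws_db_proxy_target" "demo" {
  db_proxy_name         = aws_db_proxy.demo.name
  target_group_name     = "default"
  db_cluster_identifier = aws_rds_cluster.demo.id
}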

As with any evaluation or proof of concept (POC), the devil is in the details. How do you monitor your new resources, what logging is important to know, what types of errors can happen, and how do you address them?

Another issue I had was that the RDS Proxy required a new version of the AWS CLI in order to run RDS commands such as describe-db-proxies. That adds an additional administrative dependency to be rolled out.

Proxies for MySQL have been around for decades; I can remember personally working on the earliest version of MySQL Proxy at MySQL Inc back in 2007. The gold standard, if you use MySQL, is ProxySQL by Sysown's René Cannaò. This is a topic for a different discussion.

Check out my code for this work.

Reading

An unexplained connection experience

The “Too many connections” problem is a common issue with applications using excessive permissions (and those that grant said global permissions). MySQL always reserves a connection for a user with the SUPER privilege, so you can get into the DB to investigate the problem with SHOW PROCESSLIST and check the limits. I, however, found the following.

mysql> show global variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 2000  |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> show global status like 'max%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 6637  |
+----------------------+-------+
1 row in set (0.00 sec)

How can Max_used_connections exceed max_connections? This is possible because you can dynamically change max_connections in a normal MySQL environment. However, this is AWS RDS, where you cannot change variables dynamically via the mysql client. You can via other command-line options, but that had not happened here. Furthermore, this server is using the default.mysql5.5 parameter group, which further validates the claim that the value has not been changed.
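
For contrast, in a self-managed MySQL server this is a one-line change for a suitably privileged user:

mysql> SET GLOBAL max_connections = 5000;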

I do not have an answer for the client in this case.

I would also add this as another ding on the usability of RDS in production environments. I was locked out of the DB for a long time, with no visibility into what was going on. The only options were to wait, or to restart the server. RDS does not provide this level of visibility into the processlist via a privileged user that could see what was going on. Perhaps that is an interface they should consider in the future.

Additional DB objects in AWS RDS

To expand on Jervin's Default RDS Account Privileges, RDS for MySQL provides a number of routines and triggers defined in the 'mysql' meta schema. These help with various tasks because the SUPER privilege is not provided.

SELECT routine_schema,routine_name
FROM information_schema.routines;
+----------------+-----------------------------------+
| routine_schema | routine_name                      |
+----------------+-----------------------------------+
| mysql          | rds_collect_global_status_history |
| mysql          | rds_disable_gsh_collector         |
| mysql          | rds_disable_gsh_rotation          |
| mysql          | rds_enable_gsh_collector          |
| mysql          | rds_enable_gsh_rotation           |
| mysql          | rds_kill                          |
| mysql          | rds_kill_query                    |
| mysql          | rds_rotate_general_log            |
| mysql          | rds_rotate_global_status_history  |
| mysql          | rds_rotate_slow_log               |
| mysql          | rds_set_configuration             |
| mysql          | rds_set_gsh_collector             |
| mysql          | rds_set_gsh_rotation              |
| mysql          | rds_show_configuration            |
| mysql          | rds_skip_repl_error               |
+----------------+-----------------------------------+
15 rows in set (0.00 sec)
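
For example, without SUPER you cannot use KILL directly; the RDS-provided routine is the substitute (the thread ID here is a hypothetical value taken from SHOW PROCESSLIST):

mysql> CALL mysql.rds_kill(12345);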

SELECT trigger_schema, trigger_name,
          CONCAT(event_object_schema,'.',event_object_table) AS table_name,
          CONCAT(action_timing,' ',event_manipulation) AS trigger_action
FROM information_schema.triggers;
+----------------+--------------+------------+----------------+
| trigger_schema | trigger_name | table_name | trigger_action |
+----------------+--------------+------------+----------------+
| mysql          | block_proc_u | mysql.proc | BEFORE UPDATE  |
| mysql          | block_proc_d | mysql.proc | BEFORE DELETE  |
| mysql          | block_user_i | mysql.user | BEFORE INSERT  |
| mysql          | block_user_u | mysql.user | BEFORE UPDATE  |
| mysql          | block_user_d | mysql.user | BEFORE DELETE  |
+----------------+--------------+------------+----------------+