Digital Tech Trek Digest [#Issue 2024.06]

This past week I was able to not just read or watch content online, but to meet people in person: in Brussels, first at the MySQL Belgian Days 2024 event, followed by FOSDEM 2024.

There was a wide array of presentations covering many different topics; this is just a summary. Fred talked about the history of command-line monitoring and gave an intro to the new player Dolphie. Dave Stokes talked security, and Sunny Bains gave us a brain dump of the TiDB scalable architecture. We got an update on PMM and MySQL on k8s from Peter Zaitsev, as well as a chat about his new product Coroot. Morgan Tocker gave a great intro to a new generation of online schema change at scale with Spirit. Alex Rubin showed us not how to hack MySQL, but how MySQL can hack you. We have all crossed paths as MySQL Inc. employees or MySQL community members since 2006.

Marcelo Altmann gave us a detailed intro to a new era of caching with ReadySet. We also heard updates on Vitess. And that was just the Day 1 presentations. The evening event was at the incredibly wall-to-wall packed Delirium Café, sponsored by ReadySet, to which we offer great thanks and cheers.

Day 2 was packed with great content about MySQL Shell, MySQL HeatWave ML and Vector, MySQL Router, and the MySQL optimizer from many well-known Oracle MySQLers, before amazing awards, Belgian beer, and black vodka of course.

Congratulations to Giuseppe Maxia on your MySQL Legends award at MySQL Belgian Days 2024. It is well deserved for all of your community contributions over the decades.

Check out the details at Unveiling the Highlights: A Look Back at MySQL Belgian Days 2024.

Saturday and Sunday were FOSDEM 24 at its usual location. With so many people crossing the university, tunnels, and weird transit paths between all the university lecture halls, it can feel like a blur. For the first time, I had no fixed agenda, so I could check out random talks on random topics.

A shout-out also to the many people I know and some new people I met: Colin Charles, Alkin Tezuysal, Walter Heck, Charly Batista, Robert Hodges, Jens Bollmann, Monty Widenius, Matthias Crauwels, Michael Pope, Marcelo Altmann, Emerson Gaudencio, Aldo Junior, and tons more I have forgotten to mention by name. There were also many conversations with random community people whose names I didn't even get, for example the team at Canonical.

About “Digital Tech Trek Digest”

Most days I take some time early in the morning to scan my inbox newsletters, the news, LinkedIn, or other sources to read something new covering professional and personal topics of interest. Turning what I read into actionable notes within a short, committed time window gives me a summary of what I learned, what I should learn and use, or what is of random interest.

Some of my regular sources include TLDR, Forbes Daily, ThoughtWorks Podcasts, Daily Dose of Data Science and BoringCashCow. Also Scientific American Technology, Fareed's Global Briefing, Software Design: Tidy First? by Kent Beck, and Last Week in AWS, to name a few.

Mastering MySQL 5.7 EOL migrations

In a recent podcast on Mastering EOL Migrations: Lessons learned from MySQL 5.7 to 8.0, I discuss with my colleague Adam North not only the technical issues that come with a major migration, but also the key business and management requirements of having a well-articulated strategy that covers:

  • Planning
  • Testing
  • Preparation
  • Proactiveness

Having a plan is key to any significant task, including data migrations. You should heed the warnings and the deprecations and consider all potential downstream product impacts, such as connector upgrades. The plan includes a timeline but also needs to define all the stakeholders, both technical and business, the definition of a successful migration, and most importantly the decision tree for a non-successful migration, which would include any outage, failback, rollback, or fix-forward requirements.

Test, Test, Test. Leveraging the simple design pattern of read-write splitting (hint: if your application does not support this, it should), you get to test all of your application reads with minimal risk and with real load, from 1% to 100%. You can validate all writes, but this does not match production concurrency; however, you can emulate load testing and, using this two-way door strategy, verify and prevent many common problems before the decision point of failover.

Being prepared means assuming your migration will fail, rather than assuming it will succeed. Rehearse all steps so they are documented and reproducible. Validate that your backup and recovery strategy is still optimal and operational with the new version, and prepare supporting staff for availability before, during, and after the migration. There are probably not many technologists who can say, "Well, that was a boring, uneventful, successful migration". The question is why not?

Being proactive is just as important. Leaving a large migration to the last minute is procrastination and a cause of unneeded stress during a non-successful migration. The Meltdown/Spectre vulnerabilities are one example of a highly impactful event outside of your control that sidelined entire teams in many companies for months. Does an outage of your cloud provider impact your uptime requirements and force you to delay a last-minute migration due to customer SLA obligations? While being prepared is for the reasons you could think of, being proactive and prepared is for the situations you have not thought of.

Having solid architectural design practices will aid greatly in many critical business requirements of uptime, read-only mode, scale-out, scale-up, and sharding. These design patterns also greatly enhance the likelihood of a successful database migration.

We have also created a Checklist to cover the planning and execution of a migration. Any input is welcome.

You can check out the video podcast on YouTube or listen with your favorite podcast tool.

Data Masking 101

I continue to dig up and share this simple approach for production data masking via SQL to create testing data sets. Time to codify it into a post.

Rather than generating a set of names and data from tools such as Mockaroo, it is more practical to use actual data for a variety of testing reasons.

The SQL below is a self-explanatory approach to removing Personally Identifiable Information (PII) while keeping the data relevant. I use this approach for a number of reasons.

  • We are using production data rather than synthetic data. Data volume, distribution, and additional column values are realistic. This is a subset of an example, but dates and locations are therefore realistic
  • Indexes (and unique indexes) still work, and distribution across the index is adequate for searching. Technically the index will be a little larger in disk footprint.
  • You cannot reverse engineer the masked value into a real value with just this data set. An engineer in a test environment cannot obtain the underlying information.
  • If you identify an issue with data quality for any row of data, there is a way to preserve the uniqueness of that row. This enables a person with production access to match the underlying row. Of course, any unique identifier (auto increment or UUID) should also be modified to mask real data.


SELECT CONCAT(SUBSTR(first_name,1,2),REPEAT('*',LENGTH(first_name)-2)) AS first_name,
CONCAT(SUBSTR(last_name,1,3),REPEAT('*',LENGTH(last_name)-3),' ', SUBSTRING(MD5(CONCAT(first_name,last_name)),1,6)) AS last_name,
CONCAT(SUBSTR(organization,1,3),REPEAT('*',LENGTH(organization)-3),' ', SUBSTRING(MD5(CONCAT(organization)),1,6)) AS organization,
created, country
FROM customer
LIMIT 10;

+------------+--------------------+------------------+---------------------+---------+
| first_name | last_name          | organization     | created             | country |
+------------+--------------------+------------------+---------------------+---------+
| Sa****     | Cor**** 4c23cd     | Ski*** d21420    | 2022-09-20 03:30:14 | PH      |
| Fu****     | Wat*** 8b97de      | Jax***** e629c2  | 2022-04-08 03:20:22 | BY      |
| Mo****     | Zis***** b11d94    | Rhy**** b4073a   | 2022-10-06 15:58:38 | IR      |
| So****     | Bad** 232cc2       | Rhy*** 1734bd    | 2022-02-01 07:35:39 | ID      |
| Ni*****    | Ter***** d9ffb5    | Wor****** 6e476c | 2021-11-08 17:07:34 | IL      |
| Ka******   | Scr***** 9201db    | Jax**** 481fd8   | 2022-08-18 19:17:54 | BR      |
| Li***      | Coz** 0447f6       | Nlo**** 11da59   | 2022-07-29 06:47:56 | HR      |
| Ch*****    | Hal******** f5d9c8 | Zoo**** c6e07d   | 2022-09-28 04:54:30 | UA      |
| Er******   | Ste******* d005f2  | Eid** ffc305     | 2022-04-28 18:50:11 | PT      |
| Fo**       | O'S***** b35c44    | Buz**** 2c8598   | 2022-09-11 02:05:55 | RU      |
+------------+--------------------+------------------+---------------------+---------+

AWS RDS Aurora wish list

I’ve had this list on a post-it note on my monitor for all of 2022. I figured it was time to write it down, and reuse the space.

In summary, AWS suffers from the same problem that almost every other product does. It sacrifices improved security for backward compatibility of functionality. IMO this is not in the best practices of a data ecosystem that is under constant attack.

  • Storage should be encrypted by default. When you launch an RDS cluster its storage is not encrypted. This goes against their own AWS Well-Architected Framework Section 2 – Security.
  • Plain text passwords. To launch a cluster you must specify a password in plain text on the command line, again not a security best practice. At the very least, change this to use a known secret from AWS Secrets Manager (see the first sketch after this list).
  • TLS for administrative accounts should be the only option. The root user should only be REQUIRE SSL (MySQL syntax).
  • Expanding on the AWS Secrets Manager usage for passwords, there should be no need for Lambda code and a CloudWatch cron event for rotation; it should just be built in automatically.
  • The awscli has a neat wait command that will block until you can execute the next statement in a series of sequential events to prepare and launch a cluster, but it doesn't work for create-db-cluster. You have to build your own manual "wait until available" process (see the second sketch after this list).
  • In my last position, I was unable to enforce TLS communications to the database from the application. This insecure practice is a more touchy situation, however, there needs to be some way to ensure security best practices over application developer laziness in the future.
  • AWS has internal special flags that only AWS support can set when say you have a bug in a version. Call it a per-client feature flag. However, there is no visibility into what is set, which account, which cluster, etc. Transparency is of value so that the customer knows to get that special flag unset after minor upgrades.
  • When you launch a new RDS Aurora cluster, for example Aurora MySQL 2.x, you get the oldest version; earlier in the year it was 2.7.2, even when 2.10.1 was released. When only an engine is specified, AWS should default to a more current version. I would not advocate the latest version as the automatic choice, but it's better to be more current.
  • The ALTER SYSTEM CRASH functionality is great, but it's incomplete. You cannot, for example, crash a global cluster, forcing a region-specific failover. If you have a multi-region disaster resiliency plan, it's impossible to actually test it. You can emulate a controlled failover, but this is a different use case to a real failover (aka Dec 2021).
  • Use an ARN when it's required, not an ID. This goes back to my earlier point of maximum compatibility over usability, but when a --db-instance-identifier or --db-cluster-identifier requires the value to be the ARN, then the parameter should be specific. IMO --identifier is what you use for an identifier argument, e.g. --db-cluster-identifier. When you specify, for example, --replication-source-identifier, this must be (as per the docs) "The Amazon Resource Name (ARN) of the source DB instance or DB cluster if this DB cluster is created as a read replica." It should then be --replication-source-arn. There are a number of different occurrences of this situation.
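
To make the first two points above concrete, here is a minimal sketch of launching an encrypted cluster with the master password pulled from AWS Secrets Manager rather than hard-coded. The identifiers, secret name, and KMS alias are placeholders, and note the password still ends up as a CLI argument, which is exactly the complaint above.

# Sketch only: secret id, KMS alias, and identifiers are placeholders.
# Assumes the secret stores just the password string.
MASTER_PASSWD=$(aws secretsmanager get-secret-value \
  --secret-id demo/aurora/master-password \
  --query SecretString --output text)

aws rds create-db-cluster \
  --db-cluster-identifier demo-cluster \
  --engine aurora-mysql \
  --master-username dba \
  --master-user-password "${MASTER_PASSWD}" \
  --storage-encrypted \
  --kms-key-id alias/aws/rds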
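
And a minimal sketch of the manual "wait until available" workaround, polling describe-db-clusters because aws rds wait has no create-db-cluster equivalent. The cluster identifier is a placeholder.

# Poll until the cluster reports available
until [ "$(aws rds describe-db-clusters \
             --db-cluster-identifier demo-cluster \
             --query 'DBClusters[0].Status' --output text)" = "available" ]; do
  sleep 30
done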

SELECT 1

If you have worked with an RDBMS for some time, you will likely have come across the statement SELECT 1.

However, rarely is it correctly explained to engineers what the origin of SELECT 1 is, and why it's useless and wasteful. A Google search is not going to give you the response you would hope for; these ranked responses are just as useless as the statement itself.

Bloat

Seeing a SELECT 1 confirms two things. First, you are using a generic ORM framework (see the quote below), and second, you have never optimized your SQL traffic patterns.

“Frameworks generally suck.
They CLAIM to improve the speed of development and abstract the need to know SQL.
The REALITY is the undocumented cost to sub-optimal performance, especially with data persistence.”

Connection Pooling

SELECT 1 comes from early implementations of connection pooling.

What is a connection pool? Rather than each new request or call getting a new database connection every time you want to return some data, programming languages implemented a cache with a pre-loaded pool of pre-established database connections. The intended goal is to avoid the expensive initial operation of getting a new database connection when you are simply retrieving data with a SELECT statement. If intelligent enough (many are not), these pools include features such as a low watermark, a high watermark, a pruning backoff of idle connections, and the ability to flush all connections.

When your code wants to access the database to retrieve data, it first asks the connection pool for an available connection from its pool, marks the connection as in use, and provides it for subsequent consumption.

Here is a simple example of the two queries that are actually executed to retrieve one piece of information.

SELECT 1
SELECT email_address, phone, position, active FROM employee where employee_id = ?

Staleness

SELECT 1 was implemented as the most lightweight SQL statement (i.e., minimal parsing, privilege checking, execution) that would validate that your connection was still active and usable. If SELECT 1 failed, for example due to a protocol communication failure across your network, the connection could be dropped from the connection pool, and a new connection could be requested from the pool. While this may appear harmless, it leads to multiple code inefficiencies, a topic for a subsequent discussion.

Failed error handling

SELECT 1 was a lazy and flawed means of performing error handling. In reality, every single SQL statement requires adequate error handling; any statement can fail to complete at any time. In the prior example, what happens if the SELECT 1 succeeds but the simple indexed SELECT statement fails? This anti-pattern also generally shows that error handling is inconsistent and highly duplicated rather than at the correct position in the data access path.

By definition, error handling is needed in an abstraction function for all SQL statements, and it needs to handle all types of errors, including the connection no longer being valid, the connection being terminated, timeouts, etc.

If you had the right error handling, SELECT 1 would be redundant and, as I stated, useless. You simply run the actual SELECT statement and handle any failure accordingly.

High availability

In today's cloud-first architectures, where high availability consists of multiple availability zones and multiple regions and application A communicates with database B, every unneeded network round-trip in a well-tuned system is wasteful, i.e. it is costing you time that could be spent rendering a result more quickly. We all know studies have shown that slow page loads drive users away from your site.

The cost of the cloud

This AWS Latency Monitoring grid by Matt Adorjan really shows you the impact that physics has on your resiliency testing strategy when application A and database B are geographically separated and you just want one piece of information.

Conclusion

The continued appearance of SELECT 1 is a reinforcement that optimizing for performance is a missing skill in a much larger engineering code-writing workforce that has lost the focus on efficiency. It is also another easy win that becomes an unnecessary battle for Data Architects trying to ensure your organization provides a better customer experience.

What does the MySQL mysqlsh util.checkForServerUpgrade() execute

During a recent Aurora MySQL 8 upgrade process, a number of validation checks failed. This is an analysis of the error message "present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES table".

Some background

During a major upgrade from Aurora MySQL 5.7 to Aurora MySQL 8.0, the cluster instances were left in an incompatible-parameters state. The upgrade-prechecks.log shed some more light on the situation with:

{
            "id": "schemaInconsistencyCheck",
            "title": "Schema inconsistencies resulting from file removal or corruption",
            "status": "OK",
            "description": "Error: Following tables show signs that either table datadir directory or frm file was removed/corrupted. Please check server logs, examine datadir to detect the issue and fix it before upgrade",
            "detectedProblems": [
                {
                    "level": "Error",
                    "dbObject": "flinestones.fred",
                    "description": "present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES table"
                }
            ]
        }, 

For anonymity, the troublesome table here is played by flinstones.fred.

This error could be reproduced more quickly with the util.checkForServerUpgrade() check, which saves creating a snapshot of your cluster, restoring from that snapshot, and then launching a cluster instance.
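
For reference, a minimal invocation looks like the following (the endpoint and user are placeholders); the same form is used in the Docker capture later in this post.

mysqlsh -h ${CLUSTER_ENDPOINT} -u ${ADMIN_USER} -p -- util checkForServerUpgrade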

18) Schema inconsistencies resulting from file removal or corruption
  Error: Following tables show signs that either table datadir directory or frm
    file was removed/corrupted. Please check server logs, examine datadir to
    detect the issue and fix it before upgrade

  mysql.rds_heartbeat2 - present in INFORMATION_SCHEMA's INNODB_SYS_TABLES
    table but missing from TABLES table
  flinstones.fred -
    present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from
    TABLES table 

As I am using the MySQL community mysqlsh tool with a managed AWS RDS MySQL cluster, I have discounted any RDS-specific messages.

Back to investigating the cause. Some basic spot checks within the Cluster confirmed this mismatch.

mysql > desc flinstones.fred;
ERROR 1146 (42S02): Table 'flinstones.fred' doesn't exist

mysql > select * from information_schema.innodb_sys_tables where name = 'flinstones/fred';

*results*
(1 row)

A closer inspection of the Aurora MySQL error log re-iterated there was some issue.

$ aws rds download-db-log-file-portion --db-instance-identifier ${INSTANCE_ID} --log-file-name error/mysql-error-running.log --output text

... 
[Warning] InnoDB: Tablespace 'flinstones/fred' exists in the cache with id 5233285 != 4954605
...

What is this check

It is easy enough to look at the SQL behind this using open-source software; you go to the source at https://github.com/mysql/mysql-shell and look at the SQL in upgrade_check.cc. As the message is near identical to what AWS provides, I am making an educated assumption the check is the same.
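
If you prefer a local copy to browsing GitHub, a quick clone and grep will land you on the same function. This is just a sketch; the exact file path differs between mysql-shell versions.

git clone --depth 1 https://github.com/mysql/mysql-shell.git
grep -rn "schemaInconsistencyCheck" mysql-shell/modules/

Either way, the check's definition is: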

// clang-format off
std::unique_ptr<Sql_upgrade_check>
Sql_upgrade_check::get_schema_inconsistency_check() {
  return std::make_unique<Sql_upgrade_check>(
      "schemaInconsistencyCheck",
      "Schema inconsistencies resulting from file removal or corruption",
      std::vector<std::string>{
       "select A.schema_name, A.table_name, 'present in INFORMATION_SCHEMA''s "
       "INNODB_SYS_TABLES table but missing from TABLES table' from (select "
       "distinct "
       replace_in_SQL("substring_index(NAME, '/',1)")
       " as schema_name, "
       replace_in_SQL("substring_index(substring_index(NAME, '/',-1),'#',1)")
       " as table_name from "
       "information_schema.innodb_sys_tables where NAME like '%/%') A left "
       "join information_schema.tables I on A.table_name = I.table_name and "
       "A.schema_name = I.table_schema where A.table_name not like 'FTS_0%' "
       "and (I.table_name IS NULL or I.table_schema IS NULL) and A.table_name "
       "not REGEXP '@[0-9]' and A.schema_name not REGEXP '@[0-9]';"},
      Upgrade_issue::ERROR,
      "Following tables show signs that either table datadir directory or frm "
      "file was removed/corrupted. Please check server logs, examine datadir "
      "to detect the issue and fix it before upgrade");
}

Ok, that's a little more difficult to read than plain text, and if I wanted to review other SQL statements this could become tedious.

Gather the SQL statements executed by util.checkForServerUpgrade()

Let’s use a more straightforward means of capturing SQL statements, the MySQL general log.

# Generate a throwaway root password
MYSQL_PASSWD=$(date | md5sum - | cut -c1-20)

# Start a MySQL 5.7 target, capture its IP, and enable the general log to record every statement
docker network create -d bridge mynetwork
docker run --name mysql57 -e MYSQL_ROOT_PASSWORD="${MYSQL_PASSWD}" -d mysql:5.7
docker network connect mynetwork mysql57
docker inspect mysql57 | grep "IPAddress"
IP=$(docker inspect mysql57 | grep '"IPAddress":' | head -1 | cut -d'"' -f4)
docker exec -it mysql57 mysql -uroot -p${MYSQL_PASSWD} -e "SET GLOBAL general_log=1"
docker exec -it mysql57 mysql -uroot -p${MYSQL_PASSWD} -e "SHOW GLOBAL VARIABLES LIKE 'general_log_file'"
GENERAL_LOG_FILE=$(docker exec -it mysql57 mysql -uroot -p${MYSQL_PASSWD} -e "SHOW GLOBAL VARIABLES LIKE 'general_log_file'" | grep general_log_file | cut -d'|' -f3)

# Run util.checkForServerUpgrade() from a MySQL 8 container (which bundles mysqlsh) against the 5.7 instance
docker run --name mysql8 -e "MYSQL_ALLOW_EMPTY_PASSWORD=yes" -d mysql/mysql-server
docker exec -it mysql8 mysqlsh -h${IP} -uroot -p${MYSQL_PASSWD} --js -- util checkForServerUpgrade | tee check.txt

# Extract the executed SQL statements from the general log
docker exec -it mysql57 grep Query ${GENERAL_LOG_FILE} | cut -c41- | tee check.sql


# Cleanup
docker stop mysql8 && docker rm mysql8
docker stop mysql57 && docker rm mysql57
docker network rm mynetwork

And we are left with the output of util.checkForServerUpgrade() and the SQL of all checks including of said statement:

check.sql

SET NAMES 'utf8mb4'
select current_user()
SELECT PRIVILEGE_TYPE, IS_GRANTABLE FROM INFORMATION_SCHEMA.USER_PRIVILEGES WHERE GRANTEE = '\'root\'@\'%\''
SELECT PRIVILEGE_TYPE, IS_GRANTABLE, TABLE_SCHEMA FROM INFORMATION_SCHEMA.SCHEMA_PRIVILEGES WHERE GRANTEE = '\'root\'@\'%\'' ORDER BY TABLE_SCHEMA
SELECT PRIVILEGE_TYPE, IS_GRANTABLE, TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLE_PRIVILEGES WHERE GRANTEE = '\'root\'@\'%\'' ORDER BY TABLE_SCHEMA, TABLE_NAME
select @@version, @@version_comment, UPPER(@@version_compile_os)
SET show_old_temporals = ON
SELECT table_schema, table_name,column_name,column_type FROM information_schema.columns WHERE column_type LIKE 'timestamp /* 5.5 binary format */'
SET show_old_temporals = OFF
select SCHEMA_NAME, 'Schema name' as WARNING from INFORMATION_SCHEMA.SCHEMATA where SCHEMA_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
SELECT TABLE_SCHEMA, TABLE_NAME, 'Table name' as WARNING FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE != 'VIEW' and TABLE_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, 'Column name' as WARNING FROM information_schema.columns WHERE TABLE_SCHEMA not in ('information_schema', 'performance_schema') and COLUMN_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
SELECT TRIGGER_SCHEMA, TRIGGER_NAME, 'Trigger name' as WARNING FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
SELECT TABLE_SCHEMA, TABLE_NAME, 'View name' as WARNING FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
SELECT ROUTINE_SCHEMA, ROUTINE_NAME, 'Routine name' as WARNING FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
SELECT EVENT_SCHEMA, EVENT_NAME, 'Event name' as WARNING FROM INFORMATION_SCHEMA.EVENTS WHERE EVENT_NAME in ('ADMIN', 'CUBE', 'CUME_DIST', 'DENSE_RANK', 'EMPTY', 'EXCEPT', 'FIRST_VALUE', 'FUNCTION', 'GROUPING', 'GROUPS', 'JSON_TABLE', 'LAG', 'LAST_VALUE', 'LEAD', 'NTH_VALUE', 'NTILE', 'OF', 'OVER', 'PERCENT_RANK', 'PERSIST', 'PERSIST_ONLY', 'RANK', 'RECURSIVE', 'ROW', 'ROWS', 'ROW_NUMBER', 'SYSTEM', 'WINDOW', 'LATERAL', 'ARRAY' ,'MEMBER' )
select SCHEMA_NAME, concat('schema''s default character set: ',  DEFAULT_CHARACTER_SET_NAME) from INFORMATION_SCHEMA.schemata where SCHEMA_NAME not in ('information_schema', 'performance_schema', 'sys') and DEFAULT_CHARACTER_SET_NAME in ('utf8', 'utf8mb3')
select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, concat('column''s default character set: ',CHARACTER_SET_NAME) from information_schema.columns where CHARACTER_SET_NAME in ('utf8', 'utf8mb3') and TABLE_SCHEMA not in ('sys', 'performance_schema', 'information_schema', 'mysql')
SELECT TABLE_SCHEMA, TABLE_NAME, 'Table name used in mysql schema in 8.0' as WARNING FROM INFORMATION_SCHEMA.TABLES WHERE LOWER(TABLE_SCHEMA) = 'mysql' and LOWER(TABLE_NAME) IN ('catalogs', 'character_sets', 'collations', 'column_type_elements', 'columns', 'dd_properties', 'events', 'foreign_key_column_usage', 'foreign_keys', 'index_column_usage', 'index_partitions', 'index_stats', 'indexes', 'parameter_type_elements', 'parameters', 'routines', 'schemata', 'st_spatial_reference_systems', 'table_partition_values', 'table_partitions', 'table_stats', 'tables', 'tablespace_files', 'tablespaces', 'triggers', 'view_routine_usage', 'view_table_usage', 'component', 'default_roles', 'global_grants', 'innodb_ddl_log', 'innodb_dynamic_metadata', 'password_history', 'role_edges')
select table_schema, table_name, concat(engine, ' engine does not support native partitioning') from information_schema.Tables where create_options like '%partitioned%' and upper(engine) not in ('INNODB', 'NDB', 'NDBCLUSTER')
select table_schema, table_name, 'Foreign key longer than 64 characters' as description from information_schema.tables where table_name in (select left(substr(id,instr(id,'/')+1), instr(substr(id,instr(id,'/')+1),'_ibfk_')-1) from information_schema.innodb_sys_foreign where length(substr(id,instr(id,'/')+1))>64)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete MAXDB sql_mode') from information_schema.routines where find_in_set('MAXDB', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete MAXDB sql_mode' from information_schema.EVENTS where find_in_set('MAXDB', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete MAXDB sql_mode' from information_schema.TRIGGERS where find_in_set('MAXDB', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete MAXDB option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('MAXDB', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete DB2 sql_mode') from information_schema.routines where find_in_set('DB2', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete DB2 sql_mode' from information_schema.EVENTS where find_in_set('DB2', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete DB2 sql_mode' from information_schema.TRIGGERS where find_in_set('DB2', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete DB2 option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('DB2', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete MSSQL sql_mode') from information_schema.routines where find_in_set('MSSQL', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete MSSQL sql_mode' from information_schema.EVENTS where find_in_set('MSSQL', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete MSSQL sql_mode' from information_schema.TRIGGERS where find_in_set('MSSQL', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete MSSQL option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('MSSQL', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete MYSQL323 sql_mode') from information_schema.routines where find_in_set('MYSQL323', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete MYSQL323 sql_mode' from information_schema.EVENTS where find_in_set('MYSQL323', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete MYSQL323 sql_mode' from information_schema.TRIGGERS where find_in_set('MYSQL323', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete MYSQL323 option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('MYSQL323', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete MYSQL40 sql_mode') from information_schema.routines where find_in_set('MYSQL40', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete MYSQL40 sql_mode' from information_schema.EVENTS where find_in_set('MYSQL40', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete MYSQL40 sql_mode' from information_schema.TRIGGERS where find_in_set('MYSQL40', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete MYSQL40 option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('MYSQL40', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete NO_AUTO_CREATE_USER sql_mode') from information_schema.routines where find_in_set('NO_AUTO_CREATE_USER', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete NO_AUTO_CREATE_USER sql_mode' from information_schema.EVENTS where find_in_set('NO_AUTO_CREATE_USER', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete NO_AUTO_CREATE_USER sql_mode' from information_schema.TRIGGERS where find_in_set('NO_AUTO_CREATE_USER', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete NO_AUTO_CREATE_USER option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('NO_AUTO_CREATE_USER', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete NO_FIELD_OPTIONS sql_mode') from information_schema.routines where find_in_set('NO_FIELD_OPTIONS', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete NO_FIELD_OPTIONS sql_mode' from information_schema.EVENTS where find_in_set('NO_FIELD_OPTIONS', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete NO_FIELD_OPTIONS sql_mode' from information_schema.TRIGGERS where find_in_set('NO_FIELD_OPTIONS', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete NO_FIELD_OPTIONS option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('NO_FIELD_OPTIONS', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete NO_KEY_OPTIONS sql_mode') from information_schema.routines where find_in_set('NO_KEY_OPTIONS', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete NO_KEY_OPTIONS sql_mode' from information_schema.EVENTS where find_in_set('NO_KEY_OPTIONS', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete NO_KEY_OPTIONS sql_mode' from information_schema.TRIGGERS where find_in_set('NO_KEY_OPTIONS', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete NO_KEY_OPTIONS option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('NO_KEY_OPTIONS', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete NO_TABLE_OPTIONS sql_mode') from information_schema.routines where find_in_set('NO_TABLE_OPTIONS', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete NO_TABLE_OPTIONS sql_mode' from information_schema.EVENTS where find_in_set('NO_TABLE_OPTIONS', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete NO_TABLE_OPTIONS sql_mode' from information_schema.TRIGGERS where find_in_set('NO_TABLE_OPTIONS', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete NO_TABLE_OPTIONS option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('NO_TABLE_OPTIONS', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete ORACLE sql_mode') from information_schema.routines where find_in_set('ORACLE', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete ORACLE sql_mode' from information_schema.EVENTS where find_in_set('ORACLE', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete ORACLE sql_mode' from information_schema.TRIGGERS where find_in_set('ORACLE', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete ORACLE option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('ORACLE', variable_value)
select routine_schema, routine_name, concat(routine_type, ' uses obsolete POSTGRESQL sql_mode') from information_schema.routines where find_in_set('POSTGRESQL', sql_mode)
select event_schema, event_name, 'EVENT uses obsolete POSTGRESQL sql_mode' from information_schema.EVENTS where find_in_set('POSTGRESQL', sql_mode)
select trigger_schema, trigger_name, 'TRIGGER uses obsolete POSTGRESQL sql_mode' from information_schema.TRIGGERS where find_in_set('POSTGRESQL', sql_mode)
select concat('global system variable ', variable_name), 'defined using obsolete POSTGRESQL option' as reason from performance_schema.global_variables where variable_name = 'sql_mode' and find_in_set('POSTGRESQL', variable_value)
select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, UPPER(DATA_TYPE), COLUMN_TYPE, CHARACTER_MAXIMUM_LENGTH from information_schema.columns where data_type in ('enum','set') and CHARACTER_MAXIMUM_LENGTH > 255 and table_schema not in ('information_schema')
SELECT TABLE_SCHEMA, TABLE_NAME, concat('Partition ', PARTITION_NAME, ' is in shared tablespace ', TABLESPACE_NAME) as description FROM information_schema.PARTITIONS WHERE PARTITION_NAME IS NOT NULL AND (TABLESPACE_NAME IS NOT NULL AND TABLESPACE_NAME!='innodb_file_per_table')
SELECT tablespace_name, concat('circular reference in datafile path: \'', file_name, '\'') FROM INFORMATION_SCHEMA.FILES where file_type='TABLESPACE' and (file_name rlike '[^\\.]/\\.\\./' or file_name rlike '[^\\.]\\\\\\.\\.\\\\')
select table_schema, table_name, '', 'VIEW', UPPER(view_definition) from information_schema.views where table_schema not in ('performance_schema','information_schema','sys','mysql')
select routine_schema, routine_name, '', routine_type, UPPER(routine_definition) from information_schema.routines where routine_schema not in ('performance_schema','information_schema','sys','mysql')
select TABLE_SCHEMA,TABLE_NAME,COLUMN_NAME, 'COLUMN', UPPER(GENERATION_EXPRESSION) from information_schema.columns where extra regexp 'generated' and table_schema not in ('performance_schema','information_schema','sys','mysql')
select TRIGGER_SCHEMA, TRIGGER_NAME, '', 'TRIGGER', UPPER(ACTION_STATEMENT) from information_schema.triggers where TRIGGER_SCHEMA not in ('performance_schema','information_schema','sys','mysql')
select event_schema, event_name, '', 'EVENT', UPPER(EVENT_DEFINITION) from information_schema.events where event_schema not in ('performance_schema','information_schema','sys','mysql')
select table_schema, table_name, 'VIEW', UPPER(view_definition) from information_schema.views where table_schema not in ('performance_schema','information_schema','sys','mysql') and (UPPER(view_definition) like '%ASC%' or UPPER(view_definition) like '%DESC%')
select routine_schema, routine_name, routine_type, UPPER(routine_definition) from information_schema.routines where routine_schema not in ('performance_schema','information_schema','sys','mysql') and (UPPER(routine_definition) like '%ASC%' or UPPER(routine_definition) like '%DESC%')
select TRIGGER_SCHEMA, TRIGGER_NAME, 'TRIGGER', UPPER(ACTION_STATEMENT) from information_schema.triggers where TRIGGER_SCHEMA not in ('performance_schema','information_schema','sys','mysql') and (UPPER(ACTION_STATEMENT) like '%ASC%' or UPPER(ACTION_STATEMENT) like '%DESC%')
select event_schema, event_name, 'EVENT', UPPER(EVENT_DEFINITION) from information_schema.events where event_schema not in ('performance_schema','information_schema','sys','mysql') and (UPPER(event_definition) like '%ASC%' or UPPER(event_definition) like '%DESC%')
select 'global.sql_mode', 'does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates' from (SELECT @@global.sql_mode like '%NO_ZERO_IN_DATE%' and @@global.sql_mode like '%NO_ZERO_DATE%' as zeroes_enabled) as q where q.zeroes_enabled = 0
select 'session.sql_mode', concat(' of ', q.thread_count, ' session(s) does not contain either NO_ZERO_DATE or NO_ZERO_IN_DATE which allows insertion of zero dates') FROM (select count(thread_id) as thread_count from performance_schema.variables_by_thread WHERE variable_name = 'sql_mode' and (variable_value not like '%NO_ZERO_IN_DATE%' or variable_value not like '%NO_ZERO_DATE%')) as q where q.thread_count > 0
select TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, concat('column has zero default value: ', COLUMN_DEFAULT) from information_schema.columns where TABLE_SCHEMA not in ('performance_schema','information_schema','sys','mysql') and DATA_TYPE in ('timestamp', 'datetime', 'date') and COLUMN_DEFAULT like '0000-00-00%'
select A.schema_name, A.table_name, 'present in INFORMATION_SCHEMA''s INNODB_SYS_TABLES table but missing from TABLES table' from (select distinct replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(NAME, '/',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as schema_name, replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(substring_index(NAME, '/',-1),'#',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as table_name from information_schema.innodb_sys_tables where NAME like '%/%') A left join information_schema.tables I on A.table_name = I.table_name and A.schema_name = I.table_schema where A.table_name not like 'FTS_0%' and (I.table_name IS NULL or I.table_schema IS NULL) and A.table_name not REGEXP '@[0-9]' and A.schema_name not REGEXP '@[0-9]'
select a.table_schema, a.table_name, concat('recognized by the InnoDB engine but belongs to ', a.engine) from information_schema.tables a join (select replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(NAME, '/',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as table_schema, replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(substring_index(NAME, '/',-1),'#',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as table_name from information_schema.innodb_sys_tables where NAME like '%/%') b on a.table_schema = b.table_schema and a.table_name = b.table_name where a.engine != 'Innodb'
FLUSH LOCAL TABLES
SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA not in ('information_schema', 'performance_schema', 'sys')
CHECK TABLE `mysql`.`columns_priv` FOR UPGRADE
CHECK TABLE `mysql`.`db` FOR UPGRADE
CHECK TABLE `mysql`.`engine_cost` FOR UPGRADE
CHECK TABLE `mysql`.`event` FOR UPGRADE
CHECK TABLE `mysql`.`func` FOR UPGRADE
CHECK TABLE `mysql`.`general_log` FOR UPGRADE
CHECK TABLE `mysql`.`gtid_executed` FOR UPGRADE
CHECK TABLE `mysql`.`help_category` FOR UPGRADE
CHECK TABLE `mysql`.`help_keyword` FOR UPGRADE
CHECK TABLE `mysql`.`help_relation` FOR UPGRADE
CHECK TABLE `mysql`.`help_topic` FOR UPGRADE
CHECK TABLE `mysql`.`innodb_index_stats` FOR UPGRADE
CHECK TABLE `mysql`.`innodb_table_stats` FOR UPGRADE
CHECK TABLE `mysql`.`ndb_binlog_index` FOR UPGRADE
CHECK TABLE `mysql`.`plugin` FOR UPGRADE
CHECK TABLE `mysql`.`proc` FOR UPGRADE
CHECK TABLE `mysql`.`procs_priv` FOR UPGRADE
CHECK TABLE `mysql`.`proxies_priv` FOR UPGRADE
CHECK TABLE `mysql`.`server_cost` FOR UPGRADE
CHECK TABLE `mysql`.`servers` FOR UPGRADE
CHECK TABLE `mysql`.`slave_master_info` FOR UPGRADE
CHECK TABLE `mysql`.`slave_relay_log_info` FOR UPGRADE
CHECK TABLE `mysql`.`slave_worker_info` FOR UPGRADE
CHECK TABLE `mysql`.`slow_log` FOR UPGRADE
CHECK TABLE `mysql`.`tables_priv` FOR UPGRADE
CHECK TABLE `mysql`.`time_zone` FOR UPGRADE
CHECK TABLE `mysql`.`time_zone_leap_second` FOR UPGRADE
CHECK TABLE `mysql`.`time_zone_name` FOR UPGRADE
CHECK TABLE `mysql`.`time_zone_transition` FOR UPGRADE
CHECK TABLE `mysql`.`time_zone_transition_type` FOR UPGRADE
CHECK TABLE `mysql`.`user` FOR UPGRADE

check.txt

Cannot set LC_ALL to locale en_US.UTF-8: No such file or directory
WARNING: Using a password on the command line interface can be insecure.
The MySQL server at 172.17.0.3:3306, version 5.7.33 - MySQL Community Server
(GPL), will now be checked for compatibility issues for upgrade to MySQL
8.0.24...

1) Usage of old temporal type
  No issues found

2) Usage of db objects with names conflicting with new reserved keywords
  No issues found

3) Usage of utf8mb3 charset
  No issues found

4) Table names in the mysql schema conflicting with new tables in 8.0
  No issues found

5) Partitioned tables using engines with non native partitioning
  No issues found

6) Foreign key constraint names longer than 64 characters
  No issues found

7) Usage of obsolete MAXDB sql_mode flag
  No issues found

8) Usage of obsolete sql_mode flags
  Notice: The following DB objects have obsolete options persisted for
    sql_mode, which will be cleared during upgrade to 8.0.
  More information:

https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals

  global system variable sql_mode - defined using obsolete NO_AUTO_CREATE_USER
    option

9) ENUM/SET column definitions containing elements longer than 255 characters
  No issues found

10) Usage of partitioned tables in shared tablespaces
  No issues found

11) Circular directory references in tablespace data file paths
  No issues found

12) Usage of removed functions
  No issues found

13) Usage of removed GROUP BY ASC/DESC syntax
  No issues found

14) Removed system variables for error logging to the system log configuration
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:

https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-13.html#mysqld-8-0-13-logging

15) Removed system variables
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:

https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed

16) System variables with new default values
  To run this check requires full path to MySQL server configuration file to be specified at 'configPath' key of options dictionary
  More information:

https://mysqlserverteam.com/new-defaults-in-mysql-8-0/

17) Zero Date, Datetime, and Timestamp values
  No issues found

18) Schema inconsistencies resulting from file removal or corruption
  No issues found

19) Tables recognized by InnoDB that belong to a different engine
  No issues found

20) Issues reported by 'check table x for upgrade' command
  No issues found

21) New default authentication plugin considerations
  Warning: The new default authentication plugin 'caching_sha2_password' offers
    more secure password hashing than previously used 'mysql_native_password'
    (and consequent improved client connection authentication). However, it also
    has compatibility implications that may affect existing MySQL installations.
    If your MySQL installation must serve pre-8.0 clients and you encounter
    compatibility issues after upgrading, the simplest way to address those
    issues is to reconfigure the server to revert to the previous default
    authentication plugin (mysql_native_password). For example, use these lines
    in the server option file:

    [mysqld]
    default_authentication_plugin=mysql_native_password

    However, the setting should be viewed as temporary, not as a long term or
    permanent solution, because it causes new accounts created with the setting
    in effect to forego the improved authentication security.
    If you are using replication please take time to understand how the
    authentication plugin changes may impact you.
  More information:

https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues


https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-replication

Errors:   0
Warnings: 1
Notices:  1

No fatal errors were found that would prevent an upgrade, but some potential issues were detected. Please ensure that the reported issues are not significant before upgrading.

The pre-pre SQL check

I am now armed with a simplified single SQL statement. It does, of course, take a long time to run in a cluster with thousands of tables.

select A.schema_name, A.table_name, 
       'present in INFORMATION_SCHEMA''s INNODB_SYS_TABLES table but missing from TABLES table' 
from (select distinct replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(NAME, '/',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as schema_name, 
replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(substring_index(substring_index(NAME, '/',-1),'#',1), '@002d', '-'), '@003a', ':'), '@002e', '.'), '@0024', '$'), '@0021', '!'), '@003f', '?'), '@0025', '%'), '@0023', '#'), '@0026', '&'), '@002a', '*'), '@0040', '@')  as table_name
 from information_schema.innodb_sys_tables 
where NAME like '%/%') A 
left join information_schema.tables I on A.table_name = I.table_name and A.schema_name = I.table_schema 
where A.table_name not like 'FTS_0%' 
and (I.table_name IS NULL or I.table_schema IS NULL) 
and A.table_name not REGEXP '@[0-9]' 
and A.schema_name not REGEXP '@[0-9]';

I then performed a number of drop/remove/restart/re-create/discard tablespace steps with no success. As RDS is a managed service, the only course of action now is to open an AWS Support ticket for help with this specific internal corruption.

Upgrading to AWS Aurora MySQL 8

With Aurora MySQL 8 now generally available to all, you may want to consider the plan for an upgrade path if you would like to take advantage of the new features for your application, for example Common Table Expressions (CTEs). This new major release has a much improved and streamlined upgrade process from Aurora MySQL 5.7.

This tutorial will provide all the steps to allow you to try out setting up an Aurora cluster and performing an upgrade without any impact on your existing AWS environment. The two pre-requisites to getting started are:

You can find all the CLI cut/paste commands in my AWS Tutorials repo. This will lead you through all of the various AWS dependencies for a successful RDS Aurora cluster including IAM, KMS, VPC and EC2 requirements.

Create an RDS Aurora MySQL Cluster and Aurora MySQL Major upgrade – Aurora 2.x to Aurora 3.x can provide you with a POC of the primary operations path to achieving the goal of this post in under 30 minutes.

While this example will produce an upgraded cluster with some warnings, in real life a more detailed upgrade assessment is needed for any new version of software. The MySQL and Aurora pre-checks can be performed to minimize surprises during the final process of your data migration.

mysqlcheck --check-upgrade and the mysqlsh util.checkForServerUpgrade() pre-checks can help you be prepared and avoid having your cluster instances end up with the incompatible-parameters status. At that point, download the upgrade-prechecks.log Aurora log and trash your cluster and instance; they are unusable. Reviewing the upgrade-prechecks.log can surface more information than the mysqlsh util.checkForServerUpgrade() output.
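
As a sketch, both pre-checks can be run directly against the existing cluster endpoint before any snapshot or restore work begins (endpoint and user are placeholders):

mysqlcheck --check-upgrade --all-databases -h ${CLUSTER_ENDPOINT} -u ${ADMIN_USER} -p
mysqlsh -h ${CLUSTER_ENDPOINT} -u ${ADMIN_USER} -p -- util checkForServerUpgrade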

With an Aurora cluster configured with an instance parameter group enabling MySQL binary log replication, it is easy to have a functioning Aurora 5.7 cluster with real-time replication to an Aurora 8 cluster, minimizing any downtime in your production environment, and then benefit from an atomic data dictionary, roles, descending indexes, improved internal temporary tables, additional JSON functions, window functions, CTEs and more!

More Reading

AWS Aurora MySQL 8 is now generally available

AWS has just announced the general availability of Aurora MySQL 8 compatibility (known as Aurora Version 3). This is a long-awaited addition to RDS MySQL 8 and provides many of the new features that can be found in the open-source MySQL 8 community version.

For those unfamiliar with Amazon Aurora, my Understanding AWS RDS Aurora Capabilities presentation from Percona Live 2021 provides a great introduction to the benefits of this managed service.

There is a lot to digest, and the Aurora User Guide provides details of the new features from the MySQL 8 community version, of Aurora 3 new features, and of feature differences or unsupported features. This AWS blog post also provides a general introduction.

It is very easy to spin up a new Aurora MySQL 3.01.0 cluster in an existing environment containing existing Aurora clusters. After defining new cluster and instance parameter groups for the aurora-mysql8.0 family, or starting with the available default.aurora-mysql8.0 parameter groups, there are no other differences in aws rds create-db-cluster syntax, or when using the AWS Console or Terraform, for example.
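
A minimal sketch of that creation path follows; the identifiers are placeholders, and the engine version should be whatever aws rds describe-db-engine-versions --engine aurora-mysql currently reports.

# Sketch only: parameter group and cluster names are placeholders
aws rds create-db-cluster-parameter-group \
  --db-cluster-parameter-group-name demo-aurora-mysql8-cluster-params \
  --db-parameter-group-family aurora-mysql8.0 \
  --description "Aurora MySQL 3 cluster parameters"

aws rds create-db-cluster \
  --db-cluster-identifier demo-aurora-mysql8 \
  --engine aurora-mysql \
  --engine-version 8.0.mysql_aurora.3.01.0 \
  --db-cluster-parameter-group-name demo-aurora-mysql8-cluster-params \
  --master-username dba \
  --master-user-password "${MYSQL_PASSWD}"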

Before considering a migration of an existing Aurora cluster, there is a lot of information around parameter changes (including inclusive language functionality), and the related status and CloudWatch Metrics changes. Yes, looking at the 29 'Removed from Aurora MySQL version 3' parameters, the 30 'This parameter applies to Aurora MySQL version 3 and higher' parameters, and the presently 'Currently not available in Aurora MySQL version 3' LOAD|SELECT S3 capabilities is important. There are new reserved words to be aware of, and you will need to note how to take advantage of roles within the Aurora permissions model.

Migrating an existing Aurora MySQL 2 cluster to Aurora 3 is a little more involved than specifying the snapshot-id. Seeing your restored Aurora 2 snapshot in an Aurora 3 cluster but with a status of incompatible-parameters is a good indication that more work is needed. While I will detail some of my experiences in a subsequent post, one helpful tip: in those additional pages of the 5 rows of logs for your new cluster, after all the error.log files, you will find an upgrade-prechecks.log file. This contains an extensive list of checks and warnings performed for the upgrade. Skipping to the end of the JSON will give you an idea of your errorCount, warningCount and noticeCount.

Searching then for an object with "status": "ERROR" will find the entries matching the errorCount. Several other checks provide a "detectedProblems" section and a "level": "Error", which would also seem to need correcting. There are a lot of checks between the INFORMATION_SCHEMA, the InnoDB internal data dictionary, and the actual data/files on disk. You will also be presented with a nice long list of tables/columns using reserved words, as well as character set deprecations.
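
A sketch of pulling the log down and jumping straight to the failing checks; the instance identifier is a placeholder, and the exact log file name can be confirmed with describe-db-log-files.

aws rds describe-db-log-files --db-instance-identifier ${INSTANCE_ID}

aws rds download-db-log-file-portion \
  --db-instance-identifier ${INSTANCE_ID} \
  --log-file-name upgrade-prechecks.log \
  --output text > upgrade-prechecks.log

# Jump straight to the failing checks
grep -n '"status": "ERROR"' upgrade-prechecks.log
grep -n '"level": "Error"' upgrade-prechecks.log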

At a more technical glance of the documentation, there is a key change in how internal temporary tables are created, and how this differs between writer and reader instances. Benchmarking your application in different configurations is definitely recommended.

Restoring an Aurora 2 cluster into Aurora 3 also took significantly more time (many hours) than the simple restore-db-cluster-from-snapshot you may be used to. While Terraform cluster creation timeouts need to be increased for global clusters, this time the default 1h30 timeout for an instance was also exceeded.

While different features will benefit different consumers of your Aurora database, one of the most anticipated is CTEs. From the operations perspective, as a managed service Aurora offers a subset of community features. One great feature that is now available in Aurora 3 is binary log filtering, a simple long-lived option in MySQL land that will help replace more complex functionality.

This is a great and long awaited version release for Aurora.

A QLDB Cheat Sheet for MySQL Users

The AWS ledger database (QLDB) is an auditor's best friend and lives up to the stated description of "Amazon QLDB can be used to track each and every application data change and maintains a complete and verifiable history of changes over time."

This presentation will go over what was done to take a MySQL application that provided auditing activity changes for key data, and how it is being migrated to QLDB.

While QLDB does use a SQL-format for DML (PartiQL), and you can perform the traditional INSERT/UPDATE/DELETE/SELECT, the ability to extend these statements to manipulate Amazon Ion data (a superset of JSON) gives you improved capabilities and statements.

Get a comparison of how to map a MySQL structure of multiple tables and lots of columns into a single QLDB table, and then benefit from an immutable and cryptographically verifiable transaction log. No more triggers, duplicated tables, extra auditing for abuse of binary log activity.

We also cover the simplicity of using X Protocol and JSON output for data migration, and the complexity of AWS RDS not supporting X Protocol.

Understanding AWS RDS Aurora Capabilities

The RDS Aurora MySQL/PostgreSQL capabilities of AWS extend the High Availability (HA) capabilities of RDS read replicas and Multi-AZ. In this presentation I discuss the different capabilities and HA configurations with RDS Aurora including:

  • RDS Aurora Cluster single instance
  • RDS Aurora Cluster multiple instances (writer + 1 or more readers)
  • RDS Aurora Cluster multi-master
  • RDS Aurora Global Cluster
  • RDS Aurora Cluster options for multi-regions

Each option has its relative merits and limitations. Each will depend on your business requirements, global needs and budget.

#WDILTW – Functions with options

In the late 1990s I learned MySQL and Java at approximately the same time. How did I teach myself? For MySQL I read the online MySQL manual cover to cover. For Java it was the Java Language Specification or the Java Programming Language book, again cover to cover. Then, for the record, I read Effective Java and was totally lost. I re-read it three or four years later and then it made sense.

At that time, with MySQL 3.22/3.23, the scope of the product was smaller, as was Java 1.2. I am confident I have forgotten as much as I retain, however it still amazes me when I do not recall, or perhaps never learned, the simplest of functionality. Which brings me to today's What Did I Learn This Week.

In MySQL, there is a TRIM() function, as the name suggests it trims whitespace, or so I thought. It actually does a lot more as the current MySQL 8.0 manual page states.

You can, for example, trim whitespace just LEADING or just TRAILING, and in fact you can trim any pattern of characters LEADING, TRAILING or BOTH. I just did not know that.

(from the man page)

mysql> SELECT TRIM('  bar   ');
        -> 'bar'
mysql> SELECT TRIM(LEADING 'x' FROM 'xxxbarxxx');
        -> 'barxxx'
mysql> SELECT TRIM(BOTH 'x' FROM 'xxxbarxxx');
        -> 'bar'
mysql> SELECT TRIM(TRAILING 'xyz' FROM 'barxxyz');
        -> 'barx'

For the record, both products are still widely used more than 20 years after I first started. MySQL has gone from 3.22 to 3.23, 4.0, 4.1, 5.0, 5.1 (plus the defunct 5.4 and even a 6.0, I think), 5.5, 5.6, 5.7 and now 8.0. Java was SE 1.2, then 1.3, 1.4, 5.0, 6, 7, 8 (my last major version), 9, 10, 11, 12, 13, 14, 15 and now 16, with 17 in the works.

I guess it's never too late to re-read the manual.

Upcoming Percona Live 2021 Presentations

I am pleased to have been selected to present at Percona Live 2021 May 12-13. My presentations include talks on AWS RDS Aurora and QLDB managed services.

Understanding AWS RDS Aurora Capabilities

The RDS Aurora MySQL/PostgreSQL capabilities of AWS extend the HA capabilities of RDS read replicas and Multi-AZ.

In this presentation we will discuss the different capabilities and HA configurations with RDS Aurora including:

* RDS Cluster single instance
* RDS Cluster multiple instances (writer + 1 or more readers)
* RDS Cluster multi-master
* RDS Global Cluster
* RDS Cluster options for multi-regions

Each option has its relative merits and limitations. Each will depend on your business requirements, global needs and budget.

This presentation will include setup, monitoring and failover evaluations for the attendee with the goal to provide a feature matrix of when/how to consider each option as well as provide some details of the subtle differences Aurora provides.

This presentation is not going to go into the technical details of RDS Aurora’s underlying infrastructure or a feature by feature comparison of AWS RDS to AWS RDS Aurora.

A QLDB Cheatsheet for MySQL Users

Amazon's new ledger database (QLDB) is an auditor's best friend and lives up to the stated description: “Amazon QLDB can be used to track each and every application data change and maintains a complete and verifiable history of changes over time.”

This presentation will go over what was done to take a MySQL application that provided auditing of activity changes for key data, and how it is being migrated to QLDB.

While QLDB uses a SQL format for DML and you can perform the traditional INSERT/UPDATE/DELETE/SELECT, the ability to extend these statements to manipulate Amazon Ion data (a superset of JSON) gives you improved data manipulation, for example the FROM SQL statement.

Get a blow-by-blow comparison of MySQL structures (multiple tables and lots of columns) and SQL converted into a single QLDB table, with an immutable and cryptographically verifiable transaction log. No more triggers, duplicated tables, or abusing binary log activity for extra auditing.

We also cover the simplicity of using X Protocol and JSON output for data migration, and the complexity of AWS RDS not supporting X Protocol.

#WDILTW – Creating examples can be hard

This week I was evaluating AWS QLDB, specifically the verifiable history of changes, to determine how to simplify present processes that perform auditing via CDC. This is not the first time I have looked at QLDB, so there was nothing that new to learn.

What I found was that creating a workable solution with an existing application is hard. Even harder is creating an example to publish in this blog (and the purpose of this post).

First some background.

Using MySQL as the source of information, how can you leverage QLDB? It's easy to stream data from MySQL Aurora, and it's easy to stream data from QLDB, but it is not that easy to place real-time data into QLDB. AWS DMS is a good way to move data from a source to a target; previously my work has included MySQL to MySQL, MySQL to Redshift, and MySQL to Kinesis, however there is no QLDB target.

Turning the problem upside down, using QLDB as the source of information and streaming to MySQL for compatibility, seemed a way forward.

After setting up the QLDB ledger and an example table, it was time to populate it with existing data. The documented reference example looked very JSON compatible. Side bar: it is actually Amazon Ion, a superset of JSON.

INSERT INTO Person
<< {
    'FirstName' : 'Raul',
    'LastName' : 'Lewis',
    'DOB' : `1963-08-19T`,
    'GovId' : 'LEWISR261LL',
    'GovIdType' : 'Driver License',
    'Address' : '1719 University Street, Seattle, WA, 98109'
},
{
    'FirstName' : 'Brent',
    'LastName' : 'Logan',
    'DOB' : `1967-07-03T`,
    'GovId' : 'LOGANB486CG',
    'GovIdType' : 'Driver License',
    'Address' : '43 Stockert Hollow Road, Everett, WA, 98203'
} >>

Now, MySQL offers the X Protocol. This is something that lefred has evangelized for many years, and that I have seen presented many times, but finally I had a chance to use it. The MySQL Shell JSON output looked ideal.

{
    "ID": 1523,
    "Name": "Wien",
    "CountryCode": "AUT",
    "District": "Wien",
    "Info": {
        "Population": 1608144
    }
}
{
    "ID": 1524,
    "Name": "Graz",
    "CountryCode": "AUT",
    "District": "Steiermark",
    "Info": {
        "Population": 240967
    }
}

And now, onto some of the things I learned this week.
Using AWS RDS Aurora MySQL is the first stumbling block: X Protocol is not supported. As this was an example, keep it simple: mysqldump some reference data, load it into a MySQL 8 instance, and extract it into JSON, so as to potentially emulate a pipeline.

Here are my experiences of trying to refactor this into a demo to write up.

Launch a MySQL Docker container as per my standard notes. Harmless, right?

MYSQL_ROOT_PASSWORD="$(date | md5sum | cut -c1-20)#"
echo $MYSQL_ROOT_PASSWORD
docker run --name=qldb-mysql -p3306:3306 -v mysql-volume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD -d mysql/mysql-server:latest
docker logs qldb-mysql
docker exec -it qldb-mysql /bin/bash

As it's a quick demo, I shortcut credentials to make using the mysql client easier. NOTE: as I always generate a new password for each container, it's fine to include it here.

# echo "[mysql]
user=root
password='ab6ea7b0436cbc0c0d49#'" > .my.cnf

# mysql 
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

What the? Did I make a mistake? I test manually and check.

# mysql -u root -p

# cat .my.cnf

Nothing wrong there. Next check

# pwd
/
bash-4.2# grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin

And there is the first Dockerism. I don't live in Docker, so these 101 learnings would be known to those who do. First, I really think using "root" by default is a horrible idea. And when you shell in, you are not dropped into the home directory. Solved, we move on.

# mv /.my.cnf /root/.my.cnf

Mock an example as quickly as I can think.

# mysql

mysql> create schema if not exists demo;
Query OK, 1 row affected (0.00 sec)

mysql> use demo;
Database changed
mysql> create table sample(id int unsigned not null auto_increment, name varchar(30) not null, location varchar(30) not null, domain varchar(50) null, primary key(id));
Query OK, 0 rows affected (0.03 sec)
mysql> show create table sample;

mysql> insert into sample values (null,'Demo Row','USA',null), (null,'Row 2','AUS','news.com.au'), (null,'Kiwi','NZ', null);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from sample;
+----+----------+----------+-------------+
| id | name     | location | domain      |
+----+----------+----------+-------------+
|  1 | Demo Row | USA      | NULL        |
|  2 | Row 2    | AUS      | news.com.au |
|  3 | Kiwi     | NZ       | NULL        |
+----+----------+----------+-------------+
3 rows in set (0.00 sec)

Cool, now to look at it in Javascript using MySQL Shell. Hurdle 2.

# mysqlsh
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

 MySQL  JS > var session=mysqlx.getSession('root:ab6ea7b0436cbc0c0d49#@localhost')
mysqlx.getSession: Argument #1: Invalid URI: Illegal character [#] found at position 25 (ArgumentError)


What the, it doesn't like the password format. I'm not a Javascript person, and well, this is an example for blogging, which is not what was actually set up, so do it the right way and create a user.

# mysql

mysql> create user demo@localhost identified by 'qldb';
Query OK, 0 rows affected (0.01 sec)

mysql> grant ALL ON sample.* to demo@localhost;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> SHOW GRANTS FOR demo@localhost;
+----------------------------------------------------------+
| Grants for demo@localhost                                |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO `demo`@`localhost`                 |
| GRANT ALL PRIVILEGES ON `sample`.* TO `demo`@`localhost` |
+----------------------------------------------------------+
2 rows in set (0.00 sec)

Back into the MySQL Shell, and hurdle 3.

MySQL  JS > var session=mysqlx.getSession('demo:qldb@localhost')
mysqlx.getSession: Access denied for user 'demo'@'127.0.0.1' (using password: YES) (MySQL Error 1045)

Did I create the creds wrong, verify. No my password is correct.

#  mysql -udemo -pqldb -e "SELECT NOW()"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------------+
| NOW()               |
+---------------------+
| 2021-03-06 23:15:26 |
+---------------------+

I don't have time to debug this. User, take 2.

mysql> drop user demo@localhost;
Query OK, 0 rows affected (0.00 sec)

mysql> create user demo@'%' identified by 'qldb';
Query OK, 0 rows affected (0.01 sec)

mysql> grant all on demo.* to demo@'%'
    -> ;
Query OK, 0 rows affected (0.00 sec)

mysql> show grants;
+--
| Grants for root@localhost                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+---
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT OPTION                                                                                                                                                                                                                    |
| GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,FLUSH_OPTIMIZER_COSTS,FLUSH_STATUS,FLUSH_TABLES,FLUSH_USER_RESOURCES,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,INNODB_REDO_LOG_ENABLE,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SHOW_ROUTINE,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `root`@`localhost` WITH GRANT OPTION |
| GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+---
3 rows in set (0.00 sec)

mysql> show grants for demo@'%';
+--------------------------------------------------+
| Grants for demo@%                                |
+--------------------------------------------------+
| GRANT USAGE ON *.* TO `demo`@`%`                 |
| GRANT ALL PRIVILEGES ON `demo`.* TO `demo`@`%`   |
+--------------------------------------------------+
2 rows in set (0.00 sec)

Right, initially I showed the grants of the wrong user, not the new user. Note to self: I should check out the MySQL 8 improved grants. I wonder how RDS MySQL 8 handles these, and how Aurora MySQL 8 will (when it ever drops, but that's another story).

Third try is a charm. It is also nice to see query times reported to 0.0001 second granularity.

 MySQL  JS > var session=mysqlx.getSession('demo:qldb@localhost')
 MySQL  JS > var sql='SELECT * FROM demo.sample'
 MySQL  JS > session.sql(sql)
+----+----------+----------+-------------+
| id | name     | location | domain      |
+----+----------+----------+-------------+
|  1 | Demo Row | USA      | NULL        |
|  2 | Row 2    | AUS      | news.com.au |
|  3 | Kiwi     | NZ       | NULL        |
+----+----------+----------+-------------+
3 rows in set (0.0006 sec)

Now get that in JSON output. NOTE: there are 3 different JSON formats; this one matched what I needed.

bash-4.2# mysqlsh
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
 MySQL  JS > var session=mysqlx.getSession('demo:qldb@localhost')
 MySQL  JS > var sql='SELECT * FROM demo.sample'
 MySQL  JS > shell.options.set('resultFormat','json/array')
 MySQL  JS > session.sql(sql)
[
{"id":1,"name":"Demo Row","location":"USA","domain":null},
{"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
{"id":3,"name":"Kiwi","location":"NZ","domain":null}
]
3 rows in set (0.0006 sec)

Ok, that works in interactive interface, I need it scripted.

# vi
bash: vi: command not found
# yum install vi
Loaded plugins: ovl
http://repo.mysql.com/yum/mysql-connectors-community/el/7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden
Trying other mirror.
...

And another downer of Docker containers: no other tools, nor easy ways to install them. Again, I want to focus on the actual example and not all this preamble, so:

# echo "var session=mysqlx.getSession('demo:qldb@localhost')
var sql='SELECT * FROM demo.sample'
shell.options.set('resultFormat','json/array')
session.sql(sql)" > dump.js


# mysqlsh < dump.js

What the? Hurdle 4. Did I typo this as well? I check the file, then cut/paste it and get what I expect.

# cat dump.js
var session=mysqlx.getSession('demo:qldb@localhost')
var sql='SELECT * FROM demo.sample'
shell.options.set('resultFormat','json/array')
session.sql(sql)
# mysqlsh
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
 MySQL  JS > var session=mysqlx.getSession('demo:qldb@localhost')
 MySQL  JS > var sql='SELECT * FROM demo.sample'
 MySQL  JS > shell.options.set('resultFormat','json/array')
 MySQL  JS > session.sql(sql)
[
{"id":1,"name":"Demo Row","location":"USA","domain":null},
{"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
{"id":3,"name":"Kiwi","location":"NZ","domain":null}
]
3 rows in set (0.0022 sec)

This is getting crazy.

# echo '[
> {"id":1,"name":"Demo Row","location":"USA","domain":null},
> {"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
> {"id":3,"name":"Kiwi","location":"NZ","domain":null}
> ]' > sample.json
bash-4.2# jq . sample.json
bash: jq: command not found

Oh, Docker!!! Switching back to my EC2 instance now.

$ echo '[
> {"id":1,"name":"Demo Row","location":"USA","domain":null},
> {"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
> {"id":3,"name":"Kiwi","location":"NZ","domain":null}
> ]' > sample.json
$ jq . sample.json
[
  {
    "id": 1,
    "name": "Demo Row",
    "location": "USA",
    "domain": null
  },
  {
    "id": 2,
    "name": "Row 2",
    "location": "AUS",
    "domain": "news.com.au"
  },
  {
    "id": 3,
    "name": "Kiwi",
    "location": "NZ",
    "domain": null
  }
]

I am now way over the time I would like to spend on this weekly post, it's getting way too long, and I'm nowhere near showing what I actually want. Still, we trek on.

Boy, this stock EC2 image uses AWS CLI version 1; I'm sure we need v2, and indeed the command does not work!!!

$  aws qldb list-ledgers
ERROR:
$ aws --version

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ export PATH=/usr/local/bin:$PATH
$ aws --version

Can I finally get a ledger now?

$ aws qldb create-ledger --name demo --tags JIRA=DEMO-5826,Owner=RonaldBradford --permissions-mode ALLOW_ALL --no-deletion-protection
 
{
    "Name": "demo",
    "Arn": "arn:aws:qldb:us-east-1:999:ledger/demo",
    "State": "CREATING",
    "CreationDateTime": "2021-03-06T22:46:41.760000+00:00",
    "DeletionProtection": false
}

$  aws qldb list-ledgers

{
    "Ledgers": [
        {
            "Name": "xx",
            "State": "ACTIVE",
            "CreationDateTime": "2021-03-05T20:12:44.611000+00:00"
        },
        {
            "Name": "demo",
            "State": "ACTIVE",
            "CreationDateTime": "2021-03-06T22:46:41.760000+00:00"
        }
    ]
}

$ aws qldb describe-ledger --name demo
{
    "Name": "demo",
    "Arn": "arn:aws:qldb:us-east-1:999:ledger/demo",
    "State": "ACTIVE",
    "CreationDateTime": "2021-03-06T22:46:41.760000+00:00",
    "DeletionProtection": false
}

Oh, Python 2 and the lack of user packaging; more cruft in the way of getting an example.

$ pip install pyqldb==3.1.0
ERROR

$ echo "alias python=python3
alias pip=pip3" >> ~/.bash_profile
source ~/.bash_profile
$ pip --version
pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)

$ python --version
Python 3.6.8

$ pip install pyqldb==3.1.0

ERROR

$ sudo pip install pyqldb==3.1.0

Yeah! After all that, my example code works and data is inserted.

$ cat demo.py
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)


def create_table(transaction_executor, table):

    print("Creating table {}".format(table))
    transaction_executor.execute_statement("Create TABLE {}".format(table))

def create_index(transaction_executor, table, column):
    print("Creating index {}.{}".format(table, column))
    transaction_executor.execute_statement("CREATE INDEX ON {}({})".format(table,column))


def insert_record(transaction_executor, table, values):
    print("Inserting into {}".format(table))
    transaction_executor.execute_statement("INSERT INTO {} ?".format(table),  values)


table="sample"
column="id"
qldb_driver.execute_lambda(lambda executor: create_table(executor, table))
qldb_driver.execute_lambda(lambda executor: create_index(executor, table, column))


record1 = { 'id': "1",
            'name': "Demo Row",
            'location': "USA",
            'domain':  ""
        }

qldb_driver.execute_lambda(lambda x: insert_record(x, table, record1))
$ python demo.py
Initializing the driver
Creating table sample
Creating index sample.id
Inserting into sample

One verifies in the AWS Console, but you cannot show that as text in this blog, so off to find a simple client, and there is qldbshell.

What the? I installed it and it complains about pyqldb.driver.pooled_qldb_driver. I literally used the pyqldb package in the last example.

$ pip3 install qldbshell
Collecting qldbshell
  Downloading PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth' -> '/tmp/pip-p8j4d45d-uninstall/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3.6/site-packages/pip/commands/install.py", line 365, in run
    strip_file_prefix=options.strip_file_prefix,
  File "/usr/lib/python3.6/site-packages/pip/req/req_set.py", line 783, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/lib/python3.6/site-packages/pip/req/req_install.py", line 754, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/lib/python3.6/site-packages/pip/req/req_uninstall.py", line 115, in remove
    renames(path, new_path)
  File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 267, in renames
    shutil.move(old, new)
  File "/usr/lib64/python3.6/shutil.py", line 565, in move
    os.unlink(src)
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth'
[centos@ip-10-204-101-224] ~
$ sudo pip3 install qldbshell
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting qldbshell
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.21->boto3>=1.9.237->qldbshell)
Installing collected packages: amazon.ion, qldbshell
  Found existing installation: amazon.ion 0.7.0
    Uninstalling amazon.ion-0.7.0:
      Successfully uninstalled amazon.ion-0.7.0
  Running setup.py install for amazon.ion ... done
  Running setup.py install for qldbshell ... done
Successfully installed amazon.ion-0.5.0 qldbshell-1.2.0


$ sudo pip3 install qldbshell

$ qldbshell
Traceback (most recent call last):
  File "/usr/local/bin/qldbshell", line 11, in 
    load_entry_point('qldbshell==1.2.0', 'console_scripts', 'qldbshell')()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
    return ep.load()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2318, in load
    return self.resolve()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2324, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.6/site-packages/qldbshell/__main__.py", line 25, in 
    from pyqldb.driver.pooled_qldb_driver import PooledQldbDriver
ModuleNotFoundError: No module named 'pyqldb.driver.pooled_qldb_driver'
$ pip list qldbshell
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
amazon.ion (0.5.0)
boto3 (1.17.21)
botocore (1.20.21)
ionhash (1.1.0)
jmespath (0.10.0)
pip (9.0.3)
prompt-toolkit (3.0.16)
pyqldb (3.1.0)
python-dateutil (2.8.1)
qldbshell (1.2.0)
s3transfer (0.3.4)
setuptools (39.2.0)
six (1.15.0)
urllib3 (1.26.3)

So, uninstalled and re-installed and voila, my data.

$ qldbshell
usage: qldbshell [-h] [-v] [-s QLDB_SESSION_ENDPOINT] [-r REGION] [-p PROFILE]
                 -l LEDGER
qldbshell: error: the following arguments are required: -l/--ledger
$ qldbshell -l demo

Welcome to the Amazon QLDB Shell version 1.2.0
Use 'start' to initiate and interact with a transaction. 'commit' and 'abort' to commit or abort a transaction.
Use 'start; statement 1; statement 2; commit; start; statement 3; commit' to create transactions non-interactively.
Use 'help' for the help section.
All other commands will be interpreted as PartiQL statements until the 'exit' or 'quit' command is issued.

qldbshell >

qldbshell > SELECT * FROM sample;                                                                                                                           
INFO:
{
 id: "1",
 name: "Demo Row",
 location: "USA",
 domain: ""
}
INFO: (0.1718s)

qldbshell > \q                                                                                                                                              
WARNING: Error while executing query: An error occurred (BadRequestException) when calling the SendCommand operation: Lexer Error: at line 1, column 1: invalid character at, '\' [U+5c];
INFO: (0.1134s)
qldbshell > exit                                                                                                                                            
Exiting QLDB Shell

Right, \q is a mysqlism of the client; I need to rewire myself.

Now I have a ledger, I created an example table, and I mocked a row of data and verified it. Now I can just load the JSON sample data I created earlier, right? Wrong!!!

$ cat load.py
import json
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)

def insert_record(transaction_executor, table, values):
  print("Inserting into {}".format(table))
  transaction_executor.execute_statement("INSERT INTO {} ?".format(table),  values)


table="sample"

with open('sample.json') as f:
  data=json.load(f)

qldb_driver.execute_lambda(lambda x: insert_record(x, table, data))

$ python load.py
Traceback (most recent call last):
  File "load.py", line 2, in 
    from pyqldb.config.retry_config import RetryConfig
ModuleNotFoundError: No module named 'pyqldb'
[centos@ip-10-204-101-224] ~

Oh sweet, I'd installed that, and used it, and re-installed it.

$ pip list | grep pyqldb
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
[centos@ip-10-204-101-224] ~
$ sudo pip3 install pyqldb
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting pyqldb
Requirement already satisfied: boto3<2,>=1.16.56 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: botocore<2,>=1.19.56 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: ionhash<2,>=1.1.0 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion<1,>=0.7.0->pyqldb)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3<2,>=1.16.56->pyqldb)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3<2,>=1.16.56->pyqldb)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/site-packages (from botocore<2,>=1.19.56->pyqldb)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore<2,>=1.19.56->pyqldb)
Installing collected packages: amazon.ion, pyqldb
  Found existing installation: amazon.ion 0.5.0
    Uninstalling amazon.ion-0.5.0:
      Successfully uninstalled amazon.ion-0.5.0
  Running setup.py install for amazon.ion ... done
  Running setup.py install for pyqldb ... done
Successfully installed amazon.ion-0.7.0 pyqldb-3.1.0

Load one more time.


$ cat load.py
import json
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)

def insert_record(transaction_executor, table, values):
  print("Inserting into {}".format(table))
  transaction_executor.execute_statement("INSERT INTO {} ?".format(table),  values)


table="sample"

with open('sample.json') as f:
  data=json.load(f)

qldb_driver.execute_lambda(lambda x: insert_record(x, table, data))

$ python load.py
Initializing the driver
Inserting into sample

And done, I've got my JSON-extracted MySQL 8 data in QLDB. I go to vet it in the client and, boy, I didn't expect yet another package screw-up. Clearly these two AWS Python packages are incompatible. That calls for a venv, but I'm now at double my desired time to show this.

$ qldbshell -l demo
Traceback (most recent call last):
  File "/usr/local/bin/qldbshell", line 11, in 
    load_entry_point('qldbshell==1.2.0', 'console_scripts', 'qldbshell')()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
    return ep.load()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2318, in load
    return self.resolve()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2324, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.6/site-packages/qldbshell/__main__.py", line 25, in 
    from pyqldb.driver.pooled_qldb_driver import PooledQldbDriver
ModuleNotFoundError: No module named 'pyqldb.driver.pooled_qldb_driver'
[centos@ip-10-204-101-224] ~
$ pip list | grep qldbshell
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
qldbshell (1.2.0)


$ sudo pip uninstall qldbshell pyqldb

$ sudo pip install qldbshell
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting qldbshell
  Downloading Requirement already satisfied: boto3>=1.9.237 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: amazon.ion<0.6.0,>=0.5.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: prompt_toolkit<3.1.0,>=3.0.5 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: ionhash~=1.1.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: botocore<1.21.0,>=1.20.21 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion<0.6.0,>=0.5.0->qldbshell)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/site-packages (from prompt_toolkit<3.1.0,>=3.0.5->qldbshell)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.21->boto3>=1.9.237->qldbshell)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore<1.21.0,>=1.20.21->boto3>=1.9.237->qldbshell)
Installing collected packages: qldbshell
  Running setup.py install for qldbshell ... done
Successfully installed qldbshell-1.2.0

Can I see my data now?


$ qldbshell -l demo

Welcome to the Amazon QLDB Shell version 1.2.0
Use 'start' to initiate and interact with a transaction. 'commit' and 'abort' to commit or abort a transaction.
Use 'start; statement 1; statement 2; commit; start; statement 3; commit' to create transactions non-interactively.
Use 'help' for the help section.
All other commands will be interpreted as PartiQL statements until the 'exit' or 'quit' command is issued.

qldbshell > select * from sample;                                                                                                                           
INFO:
{
 id: 1,
 name: "Demo Row",
 location: "USA",
 domain: null
},
{
 id: 1,
 name: "Demo Row",
 location: "USA",
 domain: null
},
{
 id: "1",
 name: "Demo Row",
 location: "USA",
 domain: ""
},
{
 id: 3,
 name: "Kiwi",
 location: "NZ",
 domain: null
},
{
 id: 2,
 name: "Row 2",
 location: "AUS",
 domain: "news.com.au"
},
{
 id: 3,
 name: "Kiwi",
 location: "NZ",
 domain: null
},
{
 id: 2,
 name: "Row 2",
 location: "AUS",
 domain: "news.com.au"
}
INFO: (0.0815s)

And yes, data. I see it's duplicated, so somewhere in those 10 steps I must have run the load twice. This does highlight a known limitation of QLDB: no unique constraints.

But wait, that data is not really correct; I don't want null. Going back to the JSON, I see the MySQL Shell output gives exactly that.

$ jq . sample.json
[
  {
    "id": 1,
    "name": "Demo Row",
    "location": "USA",
    "domain": null
  },
...

At some point I also got this load error, but by now I've given up documenting how to do something in order to demonstrate something.

NameError: name 'null' is not defined

One has to wrap the only nullable column with IFNULL(subdomain,'') AS subdomain and redo all those steps again. This is not going to be practical for a wider table where every nullable column needs wrapping with IFNULL.
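
Using the demo table from earlier as a sketch, the export query would need to look something like this for every nullable column:

-- Wrap nullable columns so the JSON export contains '' instead of null
SELECT id, name, location, IFNULL(domain, '') AS domain
FROM demo.sample;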

However, having exhausted all this time on what was supposed to be a quiet weekend few hours, my post is way too long, and I've learned that "Creating examples can be hard".

#WDILTW – What can I run from my AWS Aurora database

When you work with AWS Aurora you have limited admin privileges. There are some different grants for MySQL, including SELECT INTO S3 and LOAD FROM S3, that replace the lost functionality of SELECT INTO OUTFILE and mysqldump/mysqlimport using a delimited format. While I know and use Lambda capabilities, I have never executed anything with INVOKE LAMBDA directly from the database.
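
As a sketch of the general shape of those Aurora-specific statements (the bucket names and paths here are placeholders, and an attached IAM role plus the relevant privileges are prerequisites):

-- Export a table to S3 as delimited text (Aurora MySQL)
SELECT * FROM orders
INTO OUTFILE S3 's3://my-example-bucket/exports/orders'
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Import delimited data from S3 into a table
LOAD DATA FROM S3 's3://my-example-bucket/exports/orders.part_00000'
INTO TABLE orders_copy
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';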

This week I found out about INVOKE COMPREHEND (I had to look that product up) and INVOKE SAGEMAKER (which I used independently). These are machine learning capabilities that enable you to build custom integrations using Comprehend and SageMaker. I did not have any chance to evaluate these capabilities so I am unable to share any use cases or experiences. There are two built-in Comprehend functions, aws_comprehend_detect_sentiment() and aws_comprehend_detect_sentiment_confidence(), which are a likely future starting place. SageMaker is invoked as an extension of CREATE FUNCTION that provides the ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT syntax.

Also available are some MySQL status variables including Aurora_ml_logical_response_cnt, Aurora_ml_actual_request_cnt, Aurora_ml_actual_response_cnt, Aurora_ml_cache_hit_cnt, Aurora_ml_single_request_cnt.

Some googling found an interesting simple example: calculating the positive/negative sentiment and confidence for sentences of text. I could see this as useful for analyzing comments. I've included the example from this site here to encourage my readers to take a look, as I plan to do. Post IAM configuration, I will be really curious to evaluate the responsiveness of this example. Is this truly a batch-only operation, or could you return a meaningful response in a timely fashion?
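
The gist of that example, sketched here against a hypothetical comments table (the two built-in function names are the ones mentioned above; everything else is an assumption):

-- Score each comment with Amazon Comprehend via the Aurora ML integration
SELECT comment_text,
       aws_comprehend_detect_sentiment(comment_text, 'en') AS sentiment,
       aws_comprehend_detect_sentiment_confidence(comment_text, 'en') AS confidence
FROM comments
LIMIT 10;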

This also led to bookmarking for later reading https://awsauroralabsmy.com/, https://github.com/aws-samples/amazon-aurora-labs-for-mysql/ and https://squidfunk.github.io/mkdocs-material/, all from this one page.

#WDILTW – To use a RDBMS is to use a transaction

I learned this week that 30+ years of Relational Database Management System (RDBMS) experience still does not prepare you for the disappointment of working with organizations that use an RDBMS (MySQL specifically), have a released production product, and have dozens to hundreds of developers, team leaders and architects, yet do not know the importance of, nor use, transactions. If I were to ask about this when interviewing somebody who would work with a database, and the response was that it is not important or not used these days, it would be a hard fail.

To use a RDBMS is to understand a very simple principle, a foundation of a transactional system. It is called ACID. Atomicity, Consistency, Isolation, Durability.

In a simplistic description of this 50-year-old principle:

  • A – It’s all or nothing
  • C – Your work meets all existing constraints
  • I – Your work is independent and not affected by other concurrent work
  • D – Your work is reproducible in the event of a hard failure

A – Atomicity means a transaction. It's not rocket science; it's actually in the description of an RDBMS. What is a transaction? It is a means, in Structured Query Language (SQL) notation, to ensure your unit of work is all or nothing.

The most basic of examples is very simple. You want to deposit a check that was given to you. For the bank to accept this deposit, it needs to ensure that the account holder who issued the check has sufficient funds; that is, Step 1: subtract amount X from account A, and Step 2: deposit amount X into account B. You cannot do Step 1 or Step 2 independently; you must do 1 and 2 together, otherwise real money could be lost or created from thin air. The lack of transactions is also compounded by not handling I – Isolation.
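
In MySQL syntax the check deposit example is nothing more than the following (account identifiers and column names are illustrative):

START TRANSACTION;
  -- Step 1: subtract amount X from account A
  UPDATE accounts SET balance = balance - 100 WHERE account_id = 'A';
  -- Step 2: deposit amount X into account B
  UPDATE accounts SET balance = balance + 100 WHERE account_id = 'B';
COMMIT;  -- both steps succeed together; a ROLLBACK instead applies neither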

We have to ask our educators and responsible professionals: why? Why can such a bedrock principle not be used? Why, in a team of individuals in a company, over years of development and support, did not one single individual say: you have to use a transaction to ensure the integrity of the data you are working with? It's called a database for a reason.

The unfortunate reality is there are several reasons for this utter failure, and for the frustration of professionals like myself.

  1. RDBMS is not the bedrock of data management it was 20, 30 or even 50 years ago. Today, with a proliferation of different products, transactions are simply not taught. Even SQL is not taught, yet most products that want to enable users to query data ultimately provide a SQL-like interface. Many users today want a drag & drop GUI interface but do not realize that is not how you manipulate data. Hadoop was a life change with its map-reduce approach to large datasets. It did not start with SQL, but it has a widely used SQL interface now. Look at the newest hot products like Snowflake. What is its method of data access? SQL.
  2. MySQL, the most popular open source database, does not enforce transactions; they are optional. What, you say? MySQL has some unique features, including the capability for multiple storage engines that provide different features and capabilities, such as index approaches, consistency and support for transactions. An entire generation of open source products was released that does not use transactions, even while using a transactional storage engine within a transactional product.
  3. Still on the topic of MySQL, it also does not enforce strict handling like other RDBMS products. The default for decades was to allow an SQL statement to corrupt data integrity, i.e. C – Consistency. Because MySQL produced warnings, and those warnings were rarely checked by developers, data in did not guarantee data out.
  4. Every mini-generation (e.g. 5 years) of new developers thinks they know more than seasoned professionals.

As it was for me in the 80s, anybody who uses a relational database (and even a non-relational database) should read and understand the seminal work An Introduction to Database Systems by C.J. Date. It has been required learning at universities for decades, but it seems to have lost its favor with multiple iterations of new software developers.

#3 – What Did I learn this week. WDILTW.

Defensive Data Techniques

As a data architect I always ensure that for any database schema change there is a fully recoverable execution path.
I have generally advised creating a patch/revert process for every change. For example, if a change adds a new column or index to a table, a revert script would remove the respective column or index.
The goal is to always have a defensive position for any changes. The concept is that simple; it is not complex.

In its simplest form I use the following directory and file structure.

/schema
    schema.sql
    /patch
        YYYYMMDDXX.sql     where XX,ZZ are sequential 2 digit numbers, e.g. 01,02
        YYYYMMDDZZ.sql
    /revert
        YYYYMMDDXX.sql     This is the same file name in the revert sub-directory.
        YYYYMMDDZZ.sql
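
As a minimal sketch, a patch script and its matching revert script might look as follows (the file names follow the convention above; the table and column are illustrative):

-- patch/2021031501.sql : add a column
ALTER TABLE customer ADD COLUMN last_login_at DATETIME NULL;

-- revert/2021031501.sql : remove the same column
ALTER TABLE customer DROP COLUMN last_login_at;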

At any commit or tag in configuration management it is possible to create a current copy of the schema, i.e. use schema.sql.
It is also possible to take the first version of schema.sql and apply chronologically all the patch scripts to arrive at the same consistent structure of the schema that is in schema.sql. You can also run a validation process to confirm these are equivalent.
For each tagged version or commit of this directory structure and files in version control, this should always hold true.
While not the desired execution path, every revert script can be applied in a reverse chronological order and return to the first version of the schema.
If you want to maintain a first_schema.sql file within the directory structure, you can always create any version of the schema from a given commit in a roll-forward or roll-back scenario.

In reality, however, this is rarely implemented. There is always divergence or drift. Drift occurs for several primary reasons. The first is non-adherence to the defined process. The second, and more critical, is the lack of adequate testing and verification at each and every step. A Test Driven Development (TDD) approach that validates the given process would enable verification of the end state of the schema, and verification at each accumulated step along the way.

In addition to each patch/revert there needs to be a state that is maintained of what has been applied.  Generally for RDBMS storing this metadata within a table is recommended.

The above example shows files of .sql extension. Any schema management process needs to cater for .sh, .py or other extensions to cater for more complex operations.
 
What about data changes?  I would recommend that for all configuration information you follow the same management principles as for schema objects, that is you have a patch to insert/update/delete data, and you have a revert script that can restore that data.  Generally the complexity of the rollback process is a hurdle for developers/engineers. Having a framework is important to manage how data consistency is maintained. This framework could generate a statement to restore the data (e.g. a selective mysqldump), require a hand-crafted statement, or leverage the benefit of the RDBMS by storing the data into intermediate shallow tables.

Using a least privileged model complicates an applicable framework approach. Does the user applying the change now require the FILE privilege, or the CREATE/DROP privilege to create tables, in order to be able to restore data?

If there is strict referential integrity at the database level, those protections will defend against unintended consequences, for example deleting a row that other rows depend on via a foreign key relationship. In normal operations, accommodations are generally made for the sake of performance, but also to support poor data cleansing requirements. If the application maintains a level of referential integrity, the schema management process also needs to support this, adding further complexity. Ensuring data integrity is an important separate topic. If there is a dangling row, what is the impact? The data still exists, it is just not presented in a user interface or included in calculations. This generally leads to greater unintended consequences that are never obvious at the time of execution, but rather days, weeks or months later.

When it comes to objects within the structure of an RDBMS the situation is more complex. A classic example in MySQL is a user. A user in MySQL is actually just the user definition: the username, password and host. A user holds one or more grants, and the user may be the owner of additional objects. Using default and legacy MySQL, it is simply not possible to determine if a user is actually being used. Percona Server and other variants support INFORMATION_SCHEMA.USER_STATISTICS, which is a better method of evaluating the use of a user. This does however require time-based data collection, as this table holds the cumulative statistics since an instance restart or flush.
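
A sketch of such a check on Percona Server (this assumes userstat is enabled; exact column names can vary between versions):

-- Accumulated connection activity per user since the last restart/flush
SELECT USER, TOTAL_CONNECTIONS, CONNECTED_TIME
FROM INFORMATION_SCHEMA.USER_STATISTICS
ORDER BY TOTAL_CONNECTIONS DESC;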

With this type of object, or meta-object, several defensive techniques exist.

If you had the user `blargie`, and that user had grants to read data from several schemas, is the user used? "I don't think so, let's just delete it" is not a fact-based approach to avoiding a subsequent problem.
Is the user used? Let's revoke the user's privileges and monitor for errors or user feedback? Or let's change the user's password? With each of these strategies it is important to always have a defensive process to roll back.
A different approach is to use a common data technique of marking information as deleted before it's physically deleted (think trash can before you empty the trash). For MySQL users there is no default functionality for this (although in the most recent versions of MySQL you can disable a user). One implementation of this pattern is to rename the user, which has the benefit of keeping the user's password and privileges intact, therefore reducing the amount of complexity in restoring.
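
A hedged sketch of those two approaches in modern MySQL (the user and host are illustrative):

-- Soft-delete by renaming, preserving the password and grants
RENAME USER 'blargie'@'%' TO 'deleted_blargie'@'%';

-- Or, in MySQL 5.7.6 and later, disable the account in place
ALTER USER 'blargie'@'%' ACCOUNT LOCK;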

Regardless of the technique, it is important there is always a recovery path. In a subsequent post I will discuss this approach for cloud metadata, for example an AWS KMS policy, IAM Role or ASG setting, and the impact of Infrastructure as Code (IaC) tooling such as Terraform.

More reading https://en.wikipedia.org/wiki/Test-driven_development, https://en.wikipedia.org/wiki/Defensive_programming

MySQL Data Security Risk Assessment presentation

Securing your data is only as good as your weakest link. A clear-text password in a file or history file, shared privileges between test and production, or open sudo access when you can connect as an unprivileged user are all security flaws. This talk discusses how to navigate the poor defaults MySQL has in place, how to strengthen processes, and how to audit your environment. It also covers the complexity of deploying changes in an always-available production environment.

Presented at the Data.Ops Conference in Barcelona, Spain.
Download slides

Identifying MySQL SSL communication using ngrep

Prior to MySQL 5.7 client communications with a MySQL instance were unencrypted by default. This plaintext capability allowed for various tools including pt-query-digest to analyze TCP/IP traffic. Starting with MySQL 5.7 SSL is enabled by default and will be used in client communications if SSL is detected on the server.

We can detect and confirm this easily with the ngrep command.

Default client connection traffic (5.6)

On a MySQL 5.6 server we monitor the default MySQL port traffic.

mysql56$ sudo ngrep -d eth1 -wi -P ' ' -W single -l port 3306
interface: eth1 (192.168.42.0/255.255.255.0)
filter: ( port 3306 ) and ((ip || ip6) || (vlan && (ip || ip6)))
...

We connect to this server using TCP/IP.

host$ mysql -uexternal -p -h192.168.42.16
mysql56> select 'unencrypted';

We can observe the communication to and from the server (in this example 192.168.42.16) is plaintext.

...
#
T 192.168.42.1:47634 -> 192.168.42.16:3306 [AP]      select 'unencrypted'
#
T 192.168.42.16:3306 -> 192.168.42.1:47634 [AP]      !    def    unencrypted  ! !                       unencrypted
#

SSL System Variables (5.6 default)

A default 5.6 installation does not have SSL enabled as verified by system variables.

mysql56 >SHOW  VARIABLES  LIKE '%ssl%';
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| have_openssl  | DISABLED |
| have_ssl      | DISABLED |
| ssl_ca        |          |
| ssl_capath    |          |
| ssl_cert      |          |
| ssl_cipher    |          |
| ssl_crl       |          |
| ssl_crlpath   |          |
| ssl_key       |          |
+---------------+----------+
9 rows in set (0.02 sec)

Default client connection traffic (5.7)

Running the same example client connection with MySQL 5.7 you will observe that communications to and from the server (in this example 192.168.42.17) are not in plaintext.

mysql57$ sudo ngrep -d eth1 -wi -P ' ' -W single -l port 3306
interface: eth1 (192.168.42.0/255.255.255.0)
filter: ( port 3306 ) and ((ip || ip6) || (vlan && (ip || ip6)))

host$ mysql -uexternal -p -h192.168.42.17
mysql57> select 'encrypted';


T 192.168.42.1:36781 -> 192.168.42.17:3306 [AP]     @    F   l   d iVr  H   b ^    s t Z      ( 2d   " ?  |   )
#
T 192.168.42.17:3306 -> 192.168.42.1:36781 [AP]     p%  s`   3u5!%P]   v=  r # x   E   a y  '!    )Z    8   Js  z.  \t   (r H@     0 2 5k\    <   M  @)E& b q|q@    h

SSL System Variables (5.7 default)

A new MySQL 5.7 installation will have SSL enabled by default as seen in the MySQL variables.

mysql57 > SHOW GLOBAL VARIABLES LIKE '%ssl%';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| have_openssl  | YES             |
| have_ssl      | YES             |
| ssl_ca        | ca.pem          |
| ssl_capath    |                 |
| ssl_cert      | server-cert.pem |
| ssl_cipher    |                 |
| ssl_crl       |                 |
| ssl_crlpath   |                 |
| ssl_key       | server-key.pem  |
+---------------+-----------------+
9 rows in set (0.00 sec)

--ssl=0 Client connection traffic (5.7)

If you want to emulate the unencrypted nature of MySQL 5.6 within any SSL-enabled MySQL version (e.g. MySQL 5.7) you specify the --ssl=0 option for mysql client connections. In MySQL 5.7 this option is also deprecated and --ssl-mode=DISABLED should be used.

host$ mysql -uexternal -p -h192.168.42.17 --ssl=0

host >select '-ssl=0 unencrypted';

T 192.168.42.1:36785 -> 192.168.42.17:3306 [AP]      select '-ssl=0 unencrypted'
#
T 192.168.42.17:3306 -> 192.168.42.1:36785 [AP]      '    def    -ssl=0 unencrypted  ! 3              -ssl=0 unencrypted

References

https://wiki.christophchamp.com/index.php?title=Ngrep
http://infoheap.com/ngrep-quick-start-guide/
Encrypted Connections - MySQL 5.7 Reference Manual
Implementing MySQL Security Features - Tutorial at Percona Live Europe 2017.

Testing MySQL/MariaDB/Percona versions with Docker

Giuseppe Maxia has provided some great MySQL Docker images. Percona and MariaDB also provide versions via Docker Hub. In an attempt to have a consistent means of launching these different images I created the following convenience functions.

  1. Install docker for your OS. See Official Docker installation instructions.
  2. Get dockerhelper.sh
  3. Run your desired variant and version.

$ wget https://raw.githubusercontent.com/ronaldbradford/mysql-docker-minimal/master/dockerhelper.sh
$ . ./dockerhelper.sh
Docker Registered functions are:  docker_mysql, docker_percona, docker_mariadb

$ docker_mysql
ERROR: Specify a MySQL version to launch. Valid versions are 5.0 5.1 5.5 5.6 5.7 8.0
$ docker_percona
ERROR: Specify a Percona version to launch. Valid versions are 5.5 5.6 5.7
$ docker_mariadb
ERROR: Specify a MariaDB version to launch. Valid versions are 5.5 10.0 10.1

OTN appreciation day: The Performance Schema of MySQL 5.6+

To focus on just one point for OTN appreciation day on October 11 2016, and for the benefit of all users of MySQL, consider the extremely convenient and rich information available in the MySQL Performance Schema for understanding what SQL queries are running in a MySQL instance right now. The MySQL Performance Schema is enabled by default in MySQL 5.6 (performance_schema=ON).

The following one-off SQL statement will enable the instrumentation of SQL statements at the most detailed level of assessment.
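
I have not reproduced the exact statement from the original post here, but a sketch of enabling the statement instruments and consumers looks like:

-- Enable (and time) all statement instrumentation
UPDATE performance_schema.setup_instruments
SET ENABLED = 'YES', TIMED = 'YES'
WHERE NAME LIKE 'statement/%';

-- Enable the statement event consumers
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES'
WHERE NAME LIKE 'events_statements%';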

The following query will show you the longest running queries in your database at this present time.
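
Again as a sketch, one way to list the longest running statements right now (TIMER_WAIT is reported in picoseconds):

SELECT t.PROCESSLIST_ID, s.SQL_TEXT,
       s.TIMER_WAIT/1000000000000 AS exec_seconds
FROM performance_schema.events_statements_current s
JOIN performance_schema.threads t USING (THREAD_ID)
WHERE s.SQL_TEXT IS NOT NULL
ORDER BY s.TIMER_WAIT DESC
LIMIT 10;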

This ease of accessing what is running in a MySQL instance replaces many different and creative techniques as I describe in Improving MySQL Performance with Better Indexes in versions of MySQL before version 5.6.

If your organization does not have dedicated performance experts reviewing new functionality consistently and monitoring your production systems regularly for database optimization, the benefit of having the MySQL Performance Schema available, with a large number of different forms of instrumentation, outweighs any reason not to use it.

One of the best presentations at Percona Live Amsterdam last week, in the last time slot of the event (before beer and food), was Performance Schema and sys schema by Mark Leith. I hope to provide a review of this presentation soon, along with my interest in exploring the new MySQL 5.7 and 8.0 performance schema instruments. A few of my live tweets included:

MySQL 5.7 & 8.0 Performance Schema


Why I wrote this appreciation

On Friday I was asked to review the MySQL performance and load of a newly developed product during simulated testing. When I was first given access to the MySQL database server I was very disappointed that, for a new and unreleased product, the MySQL version chosen was 5.5. This is in no way disrespectful of the great stability, functionality and features of MySQL 5.5; however, for any new system under development, MySQL 5.6 and MySQL 5.7 are both much more appropriate options for many reasons. If for no other reason, upgrading to at least MySQL 5.6 to enable you to become a better expert with this functionality is one key consideration.


OTN Appreciation Day

This post format was suggested by Tim Hall, a well known community champion among Oracle users, who maintains a rich web site of news and free technical info. According to his suggestion, this post wants to add to the OTN appreciation day, a distributed community effort to show something useful, or pleasant, or both related to the Oracle world.

For those not used to the Oracle Technology Network (OTN), it is the center of Oracle technology, the place where users can get all software (proprietary or open source) and other resources related to Oracle products. In the image below you may find several familiar names.

MySQL Group Replication OOW Tutorial

The second MySQL tutorial session at Oracle Open World was “MySQL Group Replication in a Nutshell” by MySQL Community Manager Frederic Descamps. This is succinctly described as:

“Multi-master update anywhere replication for MySQL with built-in conflict detection and resolution, automatic distributed recovery, and group membership.”

MySQL Group Replication (GR) is a virtually synchronous replication solution which is an integral component of MySQL InnoDB Cluster announced at the MySQL keynote. You can download a labs version of MySQL InnoDB cluster which includes three components.

  • MySQL Router
  • MySQL Shell
  • MySQL Group Replication

While included as part of MySQL InnoDB Cluster, MySQL Group Replication can be run standalone. It is a plugin, made and packaged by MySQL. With the plugin architecture in MySQL 5.7 the time to release new features is greatly reduced from the more typical 2+ year general availability (GA) cycle. Plugins also allow functionality to be left disabled by default, therefore preserving the stability of an existing MySQL instance running version 5.7. This is a change in the philosophy of new functionality that I discussed in Understanding the MySQL Release Cadence, which in 5.7.13 introduced the SQL interface for keyring key management. Not all in the community are happy, however I consider it an important requirement for time-to-market in a fast-paced open source data ecosystem.

MySQL GR is based on Replicated Database State Machine theory and uses Paxos for evaluating consensus of available nodes in the cluster, referred to as the Group Communication System (GCS). This is one key difference from Galera, as the Paxos approach accepts the certification stage within the cluster after a majority of the nodes have acknowledged, rather than all nodes. MySQL GR is supported on a wide range of platforms including Linux, Windows, FreeBSD and Mac OS X, another difference from Galera.

The current Release Candidate (RC) version of MySQL Group Replication has some required configuration, and some application workloads may not be ideal use cases for a synchronous solution. There are also complexities in migrating any existing infrastructure to MySQL Group Replication, which has minimum requirements of MySQL 5.7, GTIDs and row-based replication. I would like to see MySQL put a lot more effort into the education and promotion of MySQL migrations from older versions to the current MySQL 5.7. Ideally I'd like to see better tools starting with MySQL 5.0, which I still see in production operation.
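
As a rough illustration of those minimum requirements (a minimal sketch only, not the complete or authoritative option list), each member needs GTIDs, row-based binary logging and the plugin itself, e.g. in my.cnf:

[mysqld]
server_id                        = 1        # unique per member
log_bin                          = binlog
binlog_format                    = ROW
binlog_checksum                  = NONE
gtid_mode                        = ON
enforce_gtid_consistency         = ON
log_slave_updates                = ON
master_info_repository           = TABLE
relay_log_info_repository        = TABLE
transaction_write_set_extraction = XXHASH64

The plugin can then be installed at runtime with INSTALL PLUGIN group_replication SONAME 'group_replication.so';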

Some things are just the impact of current development priorities. The shell does not offer a means to promote a master in a single-write configuration, i.e. the only way to simulate a failure is to produce a failure, which really means your three-node cluster is no longer highly available. The use of savepoints is not currently available, a needed feature for future full compatibility in an OpenStack deployment. The creation of a cluster via the MySQL Shell requires you to decide between supporting multi-master writes or a single master write. I can see an ideal need, to better support large batch transactions and DDL (some of those edge cases), to be able to toggle to a single write master and back. The current workaround is to utilize MySQL Router to simulate this use case. The MySQL Shell greatly reduces the complexity of orchestration. One of the features I like is a very convenient means to validate an instance to see if the configuration matches minimum requirements. For example:

$ mysqlsh
> dba.validateInstance('root@mysql3:3306')

...
ERROR: Error executing the 'check' command: The operation could not
continue due to the following requirements not being met:
Some active options on server 'mysql3@3306' are incompatible with Group
Replication.
Please restart the server 'mysql3@3306' with the updated options file
and try again.
Option name                       Required Value   Current Value    Result
--------------------------------  ---------------  ---------------  ------
binlog_checksum                   NONE             CRC32            FAIL
master_info_repository            TABLE            FILE             FAIL
relay_log_info_repository         TABLE            FILE             FAIL
transaction_write_set_extraction  XXHASH64         OFF              FAIL
 at (shell):1:4
in

This is something you can now change dynamically and persist in MySQL 8.0 using the SET PERSIST syntax.
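
For example (a minimal sketch against a MySQL 8.0 server), the failing options reported above could be corrected and persisted across restarts without touching the options file:

mysql> SET PERSIST binlog_checksum = 'NONE';
mysql> SET PERSIST transaction_write_set_extraction = 'XXHASH64';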

The overall setup in a greenfield application is reasonably clear and will improve as the product moves towards general availability. The MySQL Shell has a lot of future potential for a number of administrative functions, and the ability to switch easily between JavaScript and SQL means you can get the best of multiple languages.

In subsequent posts I will look into more detail on the setup and monitoring of a cluster with performance_schema. I hope that existing monitoring tools will also start to support monitoring Group Replication. As the author of the New Relic MySQL plugin in 2013, I may need to get motivated to offer a SaaS solution as well.

You can find more information with official blog posts on MySQL Group Replication.

Presentations at Percona Live Amsterdam 2016

I was fortunate enough to give four presentations at the Percona Live 2016 event in Amsterdam. The slides for these are now available.

New UUID functions in MySQL 8.0.0

MySQL 8.0.0 introduces three new miscellaneous UUID functions: IS_UUID(), UUID_TO_BIN() and BIN_TO_UUID(), joining the existing UUID() (in 5.0) and UUID_SHORT() (in 5.1) functions. See the 8.0.0 Release Notes.

Thanks to the great work and hosting by Marcus Popp, anybody can test out the SQL syntax of MySQL 8.0.0 using db4free without installing anything. If you want a minimal install, Giuseppe Maxia provides minimal Docker images of 5.0+ versions including 8.0.0.

A running docker container with MySQL 8.0 is as easy as:
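
A minimal sketch, assuming the mysql/mysql-server image (the tag and password here are illustrative):

$ docker run -d --name mysql8 -e MYSQL_ROOT_PASSWORD=MySecretPass mysql/mysql-server:8.0
$ docker exec -it mysql8 mysql -uroot -pMySecretPass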

The following script shows the usage and checks of these new functions.

Historically, the way to encode a UUID into a BINARY(16) datatype was to use UNHEX(REPLACE()) syntax. There was, however, no easy way to decode a BINARY(16) back into the original value. BIN_TO_UUID(), as shown in the output below, solves this problem.

mysql> SELECT IS_UUID(1);
+------------+
| IS_UUID(1) |
+------------+
|          0 |
+------------+
1 row in set (0.01 sec)

mysql> SET @uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT IS_UUID(@uuid) AS is_uuid;
+---------+
| is_uuid |
+---------+
|       1 |
+---------+
1 row in set (0.01 sec)

mysql> SELECT IS_UUID(REPLACE(@uuid,'-','')) AS is_uuid;
+---------+
| is_uuid |
+---------+
|       1 |
+---------+
1 row in set (0.00 sec)

mysql> SELECT @uuid_bin := UUID_TO_BIN(@uuid) AS uuid_bin, LENGTH(@uuid_bin) AS len;
+------------------+------+
| uuid_bin         | len  |
+------------------+------+
| ���������������� |   16 |
+------------------+------+
1 row in set (0.00 sec)

mysql> SELECT @old_uuid_bin := UNHEX(REPLACE(@uuid,'-','')) AS old_uuid_bin, LENGTH(@old_uuid_bin) AS len;
+------------------+------+
| old_uuid_bin     | len  |
+------------------+------+
| ���������������� |   16 |
+------------------+------+
1 row in set (0.00 sec)

mysql> SELECT @uuid_bin = @old_uuid_bin;
+---------------------------+
| @uuid_bin = @old_uuid_bin |
+---------------------------+
|                         1 |
+---------------------------+
1 row in set (0.00 sec)

mysql> SELECT BIN_TO_UUID(@uuid_bin) AS uuid, HEX(@old_uuid_bin) AS uuid_old;
+--------------------------------------+----------------------------------+
| uuid                                 | uuid_old                         |
+--------------------------------------+----------------------------------+
| aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee | AAAAAAAABBBBCCCCDDDDEEEEEEEEEEEE |
+--------------------------------------+----------------------------------+
1 row in set (0.01 sec)
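
A practical application of these functions (a hypothetical sketch, not from the original post) is storing UUIDs compactly in a BINARY(16) column and converting them back on retrieval:

mysql> CREATE TABLE uuid_demo (id BINARY(16) PRIMARY KEY, name VARCHAR(30));
mysql> INSERT INTO uuid_demo VALUES (UUID_TO_BIN(UUID()), 'example');
mysql> SELECT BIN_TO_UUID(id) AS id, name FROM uuid_demo;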

Introducing the MySQL Cloud Service

The MySQL keynote at Oracle Open World 2016 announced the immediate availability of the MySQL Cloud Service, part of the larger Oracle Cloud offering. You can evaluate this now with a trial copy at cloud.oracle.com/mysql. MySQL server product manager Morgan Tocker gave two presentations at the event including a deep dive session.

This is the first release of the MySQL Cloud Service. As with all first releases there are some highlights and some pipeline features. All major cloud providers have MySQL offerings: AWS RDS (traditional, MAZ and Aurora), GCP Cloud SQL and Azure MySQL App Service. Users of OpenStack have Trove for comparison. I am not going to be evaluating features between cloud offerings in this post.

Highlights

These are the differentiating highlights as I see them from the presentation. I will provide a follow-up blog on actual usage at a later time.

  • MySQL 5.7
  • MySQL Enterprise Edition (a key difference with other cloud providers)
    • MySQL Enterprise features like Firewall, Thread Pool, Auditing etc
    • MySQL Enterprise support is included in price
    • MySQL Enterprise Monitor (MEM) is available and included in price
  • SSH access to machine
    • SSH access is via a non-privileged user (opc). This shows an intention of a security-first policy.
  • Separated partitioning in OS/MySQL disk layout
  • ZFS. (Nice, I have missed using this)
  • Partition workloads optimized differently for data and sequential logging
  • Two predefined backup policies, ZFS appliance (7 day retention) and cloud storage (30 day retention)
  • The managed backup philosophy is a weekly full backup, and daily incrementals
  • Sane default MySQL configuration (my.cnf)
  • Patching notification and capability. Automated backup before patching, and rollback capability
  • The Ksplice Oracle UEK functionality for improved host uptime with security vulnerabilities or kernel improvements

Overall an A effort on paper in V1 with the willingness to be useful, sane and flexible. In a future post I will evaluate the actual MySQL cloud capabilities with the available trial.

Observations

Features and functionality I see missing from V1 during this presentation. Some are features I would like to see, some are just observations, and some are likely present but not discussed. I will leave it up to the reader to decide which is which.

  • No MySQL 5.6. There was mention of supporting two versions moving forward (i.e. 5.7 and 8)
  • Separated MEM configuration and management. See my later thoughts on this.
  • MySQL topologies and easy to apply templates, including the future MySQL InnoDB Cluster
  • A longer archive storage retention capability policy for backups and/or binary logs (e.g. for compliance reasons)
  • The size of the pre-defined dedicated logging partition and binary logging may be problematic for longer retention capacity
  • Provisioned IOPS capacity or performance guarantees for Disk I/O
  • An ability to define MySQL configuration templates (e.g. dev, test, prod, master, slave etc) and be able to select these for a new instance deployment. You can of course manage this after the fact manually.
  • The compute workloads are more generic at present. There appear to be no optimized disk, network or CPU variants.
  • Improved key management, e.g. being able to select an already defined SSH public key (such as one used with another instance)

Only offering MySQL 5.7 is an adoption impediment. This requires any organization with applications that are not greenfield to have already migrated to MySQL 5.7. I can understand the long-term rationale here, but I see it as a clear limitation for more rapid adoption.

The details

The MySQL Cloud Service takes the hard parts out of managing MySQL. This is deployed in the Oracle Public Cloud, leveraging the fault-tolerant regional deployments in place. This Database as a Service (PaaS) helps to remove those annoying pieces of administration including backups, patches, monitoring etc. Powered by MySQL 5.7 Enterprise Edition (the only cloud provider to offer this), the cloud version in use is identical to the downloadable on-premise version. The cloud service offers an initially optimized MySQL configuration (my.cnf), i.e. improvements on the 5.7 defaults, and has a variety of compute workload sizes to choose from. Storage is a ZFS appliance, but there is no information on provisioned IOPS for intensive workloads. You can use the web interface or REST API endpoints to create, deploy and manage your cloud instances. The REST API endpoints were not demonstrated in this session.

The predefined disk layout for storage is a very sane configuration. The operating system (Oracle Unbreakable Linux 6) has a dedicated partition (not part of sizing). There is a dedicated and throughput-optimized ZFS LUN for data (what you size during setup), a dedicated and latency-optimized ZFS LUN for binary and InnoDB logs (which does not appear to be sizable initially) and a dedicated ZFS LUN for backups. There is also secondary backup storage capacity by default in Cloud Storage.

The UI provides the capability to configure a MEM server and a MEM client. To conserve presentation time Morgan consolidated these into his initial demo instance. I feel there is room here to optimize the initial setup and to separate out the "management" server capabilities, e.g. selecting your MEM configuration, and by default offering just the MEM client authentication (if a MEM server is configured). For users not familiar with MySQL Enterprise features, separating the definition and management in the initial creation stage is an optimization that removes complexity. There may even be an option for a getting-started quick setup step that can provision your MEM setup in a dedicated instance when none is detected or with a new account. Here is the flip side: an inexperienced user starting out may launch a MEM server with several test instances because the initial UI setup offers these as input fields, which is not the goal when managing multiple servers. The current version of MEM shown was 3.2, with 3.3 planned. Version 3.3 includes its own web interface for backup management.

Some things are not in the initial release but I'm sure are on the roadmap. One is an upsize and downsize optimization. It would appear, via the demo, that when a compute size modification occurs the existing MySQL instance is shut down and the VM is shut down. A new VM is provisioned using the setup and disk partitions of the prior VM. An optimization is to provision a new VM, start MySQL, then stop MySQL on the new, stop on the old, unmount on the old, mount on the new, and start MySQL. This removes the downtime of the VM provisioning step. Ideally I'd like to see the capability to perform this on a slave and promote a slave more seamlessly. Practically, however, this has many more moving pieces than in theory, and so the future use of MySQL Router is a solution. The upcoming MySQL InnoDB Cluster will also suffer from the complexity of resizing and uptime availability, especially when nodes are of varying compute sizes. As mentioned, I would like to see predefined MySQL configurations. I would also like the option to pre-create multiple user authentications for instances, rather than having to specify one each time. I can see, for a class of servers, e.g. a load test environment of a master/slave setup and an application with several MySQL accounts, a need for a means of bulk user and permission management.

Under the hood, Morgan talked about the InnoDB IO configuration optimizations: the number of IO threads, use of O_DIRECT, the redo log size, and the buffer pool sized to the compute shape. The thread pool is enabled by default. The same considerations are in place for the operating system: Oracle Linux 6 UEK, MySQL task priority, memlock, and the ext4 filesystem.
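
These are standard InnoDB tuning levers; a rough my.cnf sketch of the same ideas (illustrative values only, not the service's actual defaults) would be:

[mysqld]
innodb_flush_method     = O_DIRECT   # bypass the OS page cache for data files
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
innodb_log_file_size    = 2G         # redo log sized to the write workload
innodb_buffer_pool_size = 12G        # sized to the compute shape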

Again, those unfamiliar with MySQL Enterprise features will need greater documentation and UI help in understanding the features, capabilities and configuration of Firewall, Encryption, Authentication, Audit, Monitor, Backup and Thread Pool.

The SSH access is what gives this first release the control to be flexible. You can modify the MySQL configuration and incorporate configuration management processes. You can utilize on-system database restore capabilities. You can monitor physical resource utilization. I am unsure of the total extent of control for changing (or breaking) the system and the kernel.

There was a lot to digest in the 45 minute practical demonstration session. I am sure as with more reading and evaluation there will be more to share. As the roadmap for MySQL InnoDB cluster develops I can see there being a good cadence on new features and functionality released for the MySQL Cloud Service.

My Live Tweets (as the presentation was happening)

Oracle MySQL Public Cloud landing page

MySQL Operations in Docker at Oracle Open World 2016

One of the Monday tutorials at Oracle Open World was MySQL Operations in Docker, a 2-hour tutorial by Giuseppe Maxia. This tutorial showed what you can do with MySQL on Docker, which is particularly good for testing. Some key points from the tutorial included:

  • Differences between containers and Virtual Machines (VM)
    • VM are mutable architecture, you start then modify
    • containers are an immutable architecture
  • Containers are not micro-services
  • Understanding about the “official” MySQL docker image. (Hint: Use mysql/mysql-server, not mysql)
  • The issues of specifying a required password to install MySQL on a container
  • Understanding how to use volumes, for a file (e.g. /etc/my.cnf) or a directory (e.g. /var/lib/mysql); see the sketch after this list
  • How to produce a more secure MySQL installation using files
  • How to get MySQL 5.0, 5.1, 5.5, 5.6, 5.7 and 8.0 on #CentOS, #Ubuntu and #debian for #docker using his own minimal MySQL docker images. (NOTE: MySQL images by Oracle, Percona and MariaDB are only the current version)
  • MySQL Group replication demo (mysql/mysql-gr)
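
A minimal sketch of the volume usage from the tutorial points above (my own illustrative command, assuming the mysql/mysql-server image and host paths that already exist):

$ docker run -d --name mysql57 \
    -v /opt/mysql/my.cnf:/etc/my.cnf \
    -v /opt/mysql/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=MySecretPass \
    mysql/mysql-server:5.7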

Giuseppe performed his demos on a dedicated Linux machine. My attempts to reproduce the tutorial steps on Mac failed, as mentioned, because of issues with volumes. MySQL Group Replication with Docker on Mac is also unpredictable.

In summary, Giuseppe talked about how wonderful Docker is for development and testing, but it is not advisable for production. Some of the questions regarding production concerns included the inability to work with orchestrators, stability with volumes and overall container user security. In addition, a tough audience question, "How do you upgrade MySQL in production using containers?", highlighted that this technology is evolving, and while becoming ideal for stateless applications, it is not ready for prime time for databases that require state to operate.

Get the Code Examples on GitHub.

Docker has become a popular technology for containers since 2013. It did not invent containers; A Brief History of Container Technology gives a timeline of the technologies that have got us to where we are today.

MySQL Keynote at Oracle Open World 2016

Tomas Ulin made a number of key announcements at this year's State of the Dolphin and Customer Experiences keynote: MySQL Public Cloud, the MySQL 8.0 DMR, MySQL InnoDB Cluster, and MySQL Group Replication (RC). Some tweets and points from the keynote:

There were also user stories by Nicolai Plum – Senior Systems Architect at Booking.com and Andrew Archibald – VP of Development at Churchill Downs.

Nicolai talked about how Booking.com has evolved over the years, starting with the traditional MySQL replication model, moving to a more complex sharded and partitioned architecture, and now a re-architecture towards a loosely coupled, write-optimized and read-optimized data model leveraging Redis queues. This work has enabled services that hide the complexity and the need for developers to write SQL, and that leverage better data translation and interoperability, e.g. MySQL to Hadoop to MySQL. Booking.com is actively using MySQL 5.7, and is evaluating how to incorporate the new features of MySQL 8.0.

Andrew talked about how twinspires.com uses a multi data center master-master replication setup with MySQL 5.6 to manage critical availability needs for online wagering of horse races around the world. During peak times, load can increase 100x, similar to my own experiences handling 100x flash sales, described in Improving performance – A full stack problem.

I have yet to discover how to deep link to this presentation at the OOW 2016 agenda to enable viewers to read the overview and speaker profiles.

Q: Does MySQL support ACID? A: Yes

I was recently asked this question by an experienced academic at the NY Oracle Users Group event I presented at.

Does MySQL support ACID? (ACID is a set of properties essential for a relational database to perform transactions, i.e. a discrete unit of work.)

Yes, MySQL fully supports ACID, that is Atomicity, Consistency, Isolation and Durability. (*)

This is contrary to the first Google response found searching this question which for reference states “The standard table handler for MySQL is not ACID compliant because it doesn’t support consistency, isolation, or durability”.

The question is however not a simple Yes/No because it depends on timing within the MySQL product’s lifecycle and the version/configuration used in deployment. What is also *painfully* necessary is to understand why this question would even be asked of the most popular open source relational database.

MySQL has a unique characteristic of supporting multiple storage engines. These engines enable varying ways of storing and retrieving data via the SQL interface in MySQL, and have varying features for supporting transactions, locking, index strategies, compression etc. The problem is that the default storage engine from version 3.23 (1999) to 5.1 (2010) was MyISAM, a non-transactional engine, and hence the first point of confusion.

The InnoDB storage engine has been included and supported since MySQL 3.23. This is a transactional engine supporting ACID properties. However, not all of the default settings in the various MySQL versions have fully met all ACID needs, specifically the durability of data. This is the second point of confusion. Over time, other transactional storage engines in MySQL have come and gone. InnoDB has been there since the start, so there is no excuse not to write applications to fully support transactions. The custodianship of Oracle Corporation starting in 2010 quickly corrected this *flaw* by ensuring the default storage engine in MySQL 5.5 is InnoDB. But the damage to the ecosystem that uses MySQL, that is, many thousands of open source projects and the resources that work with MySQL, has been done. Recently working on a MySQL 5.5 production system in 2016, the default engine was specifically defined in the configuration as MyISAM, and some (but not all) tables were defined using MyISAM. This is a further conversation as to why. Is this an upgrade problem? Are there legacy dependencies with applications? Are the decision makers and developers simply not aware of the configuration? Or are developers simply not comfortable with transactions?

Like other less-than-reasonable MySQL defaults, the unaware administrator or developer could consider MySQL as supporting ACID properties, and not realize, until detailed testing with concurrency and error conditions, the impact of poor configuration settings.
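
A quick sanity check (a minimal sketch of my own) is to confirm the default engine, the per-schema engine mix and the durability-related settings; full durability generally requires innodb_flush_log_at_trx_commit=1 and sync_binlog=1:

mysql> SELECT @@default_storage_engine, @@innodb_flush_log_at_trx_commit, @@sync_binlog;
mysql> SELECT table_schema, engine, COUNT(*) AS tables
    ->   FROM information_schema.tables
    ->  WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
    ->  GROUP BY table_schema, engine;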

The damage of having a non-transactional storage engine as the default for over a decade has created a generation of professionals and applications that abuses one of the primary purposes of a relational database, that is, a transaction: to produce a unit of work that is all or nothing. Popular open source projects such as WordPress, Drupal and hundreds more did not, for a long time, support transactions or use InnoDB. MediaWiki was at least one popular open source project that was proactive towards InnoDB and transaction usage. The millions of plugins, products and startups that build on these technologies have the same flaws.

Further confusion arises when an application uses InnoDB tables but does not use transactions, or the application abuses transactions, for example 3 different transactions that should really be 1.
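
A contrived illustration of the correct approach (hypothetical tables): a funds transfer should be one atomic unit of work, not three separately committed statements:

mysql> START TRANSACTION;
mysql> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
mysql> UPDATE accounts SET balance = balance + 100 WHERE id = 2;
mysql> INSERT INTO transfers (from_account, to_account, amount) VALUES (1, 2, 100);
mysql> COMMIT;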

While the newer versions, MySQL 5.6 and 5.7, improve the default configuration, until these versions are widely adopted the more common non-transactional use of a relational database will remain. A recent Effective MySQL NYC Meetup survey showed that installations of version 5.0 still exist, and that few have a policy for a regular upgrade cadence.

Do you control your database outages?

Working with a client last week I noted in my analysis, “The mysql server was restarted on Thursday and so the [updated] my.cnf settings seems current”. This occurred between starting my analysis on Wednesday and delivering my findings on Friday.

# more /var/lib/mysql/ip-104-238-102-213.secureserver.net.err
160609 17:04:43 [Note] /usr/sbin/mysqld: Normal shutdown

The client however stated they did not restart MySQL and would not do so at 5pm, which is still a high-usage time for the production system. This is unfortunately not an uncommon finding: a production system had an outage, and the client did not know about it and did not instigate it.

There are several common causes and the “DevOps” mindset for current production systems has made this worse.

  • You have managed hosting and they perform software updates with/without your knowledge. I have for example worked with several Rackspace customers and there would be an outage because Rackspace engineers decided to apply an upgrade at a time that suited them, not their customers.
  • You have chosen automated updates for your operating system.
  • Your developers update the software when they like.
  • You are using a 3rd party product that is making an arbitrary decision.

In this case the breadcrumbs lead to the last option: cPanel is performing this operation, as hinted by the cPanel-specific installed MySQL binaries.

$ rpm -qa | grep -i mysql
cpanel-perl-522-MySQL-Diff-0.43-1.cp1156.x86_64
MySQL55-devel-5.5.50-1.cp1156.x86_64
MySQL55-client-5.5.50-1.cp1156.x86_64
cpanel-perl-522-DBD-mysql-4.033-1.cp1156.x86_64
compat-MySQL50-shared-5.0.96-4.cp1136.x86_64
MySQL55-server-5.5.50-1.cp1156.x86_64
cpanel-mysql-libs-5.1.73-1.cp1156.x86_64
MySQL55-shared-5.5.50-1.cp1156.x86_64
MySQL55-test-5.5.50-1.cp1156.x86_64
compat-MySQL51-shared-5.1.73-1.cp1150.x86_64
cpanel-mysql-5.1.73-1.cp1156.x86_64

Also note that cPanel still uses MySQL 5.1 shared libraries.

So why did cPanel perform not one shutdown, but two in immediate succession? The first was 17 seconds, the second was 2 seconds. Not being experienced with cPanel I cannot offer an answer for this shutdown occurrence. I can for others, which I will detail later.

160609 17:04:24 [Note] /usr/sbin/mysqld: Normal shutdown
...
160609 17:04:28 [Note] /usr/sbin/mysqld: Shutdown complete
...
160609 17:04:41 [Note] /usr/sbin/mysqld: ready for connections.
...
160609 17:04:43 [Note] /usr/sbin/mysqld: Normal shutdown
...
160609 17:04:45 [Note] /usr/sbin/mysqld: ready for connections.
...

And why did the customer not know about the outage? If you use popular SaaS monitoring solutions such as New Relic and Pingdom you would not have been informed, because these products have a sampling time of 60 seconds. I use these products along with Nagios on my personal blog site as they provide adequate instrumentation for its frequency of usage. I would not recommend these as the only tools used in a high volume production system for this one reason alone. In a high volume system you need sampling at a much finer granularity.
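
A crude but effective finer-grained check (my own sketch, assuming credentials in ~/.my.cnf; not a substitute for a proper monitoring stack) is to poll the server every second and record any failure:

$ while true; do mysqladmin ping > /dev/null 2>&1 || echo "$(date '+%F %T') MySQL not responding"; sleep 1; done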

So just when you were going to justify that 17 seconds, while unexpected, is tolerable, I want to point out that this subsequently occurred and the outage was over 4 minutes.

160619 11:58:07 [Note] /usr/sbin/mysqld: Normal shutdown
...
160619 12:02:26 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
...

An analysis of the MySQL error log (which is correctly not rotated away, as I always recommend) showed a pattern of regular MySQL updates, from 5.5.37 through 5.5.50. This is the most likely reason a 3rd party product has performed a database outage: to perform a software update at a time of their choosing, not yours.

150316  3:54:11 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150316  3:54:22 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150316 19:07:31 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.37-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150317  2:05:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.40-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150317  2:05:54 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.40-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150319  1:17:26 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150319  1:17:34 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150616  1:39:44 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

150616  1:39:52 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.42-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151006  1:01:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.45-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151006  1:01:54 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.45-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

151027  1:21:12 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.46-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160105  1:31:35 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.47-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160211  1:52:47 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.48-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160211  1:52:55 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.48-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160503  1:14:59 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160503  1:15:03 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160521 18:46:24 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160522 11:51:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160529 15:26:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160529 15:30:12 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160604 23:29:15 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.49-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160609 17:04:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160609 17:04:45 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

160619 12:21:58 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.50-cll'  socket: '/var/lib/mysql/mysql.sock'  port: 0  MySQL Community Server (GPL)

What is intriguing from this analysis is that several versions were skipped including .38, .39, .41, .43, .44. One may ask the question why?

For Clients

This leads to several questions of the strategy used in your organization for controlling outages of your MySQL infrastructure for upgrades or for other reasons.

  • What is an acceptable outage time?
  • What is the acceptable maintenance window to perform outages?
  • What is your release cadence for MySQL upgrades?
  • Who or what performs updates?
  • Can your monitoring detect small outages?

You should also consider in your business strategy having a highly available (HA) MySQL infrastructure to avoid any outage, or application intelligence to support varying levels of data access as I describe in Successful MySQL Scalability Principles.

Understanding the MySQL Release Cadence

At the recent New York Oracle Users Group summer general meeting I gave a presentation to the Oracle community on the MySQL product release cycle. Details included:

  • Identifying the server product options covering community, enterprise and ecosystem.
  • Describing MySQL Enterprise products, features and support options.
  • Describing the DMR, RC, GA, EOL and labs product lifecycle.
  • Discussing the GA release frequency.
  • Talking about the MySQL Upgrade path.

Utilizing OpenStack Trove DBaaS for deployment management

Trove is used for self-service provisioning and lifecycle management for relational and non-relational databases in an OpenStack cloud. Trove provides a RESTful API interface that is the same regardless of the type of database. CLI tools and a web UI via Horizon are also provided, wrapping Trove API requests.

In simple terms: you are a MySQL shop. You run a replication environment with daily backups and failover capabilities, which you test and verify regularly. You have defined DBA and user credential ACLs across dev, test and prod environments. Now there is a request for using MongoDB or Cassandra; the engineering department has not decided, but they want to evaluate the capabilities. How long, as an operator, does it take to acquire the software, install, configure, set up replication, backups and ACLs, and enable the engineering department to evaluate the products?

With Trove DBaaS this complexity is eliminated, due to a consistent interface to perform the provisioning, configuration, HA, backup and recovery (B&R), and ACLs across other products the exact same way you perform these tasks for MySQL. This enables operations to be very proactive to changing technology requests, supporting digital transformation strategies.
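
For example (a rough sketch using the python-troveclient CLI; the flavor ID, volume size, datastore names and versions all depend on how your cloud is configured), provisioning an evaluation instance of another datastore looks just like provisioning MySQL:

$ trove create mysql-eval 2 --size 5 --datastore mysql --datastore_version 5.7
$ trove create mongo-eval 2 --size 5 --datastore mongodb --datastore_version 3.2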

Enabling this capability is not an automatic approval of a new technology stack. It is important that strategic planning, support and management is included in the business strategy to understanding the DBaaS capability for your organization. Examples of operations due diligence would include how to integrate these products into your monitoring, logging and alerting systems. Determine what additional disk storage requirements may be needed. Test, verify and time recovery strategies.

Trove specifically leverages several other OpenStack services for source image and instance management. Each Trove guest image includes a base operating system, the applicable database software and a database-technology-specific Trove guest agent. This agent is the intelligence that knows the specific syntax and version needed to perform the tasks. The agent is also the communication mechanism between Trove and the running Nova instance.

Trove is a total solution manager for the instance running your chosen database. Instances have no ssh, telnet or other general access. The only access is via SQL communication on the defined ports, e.g. 3306 for MySQL.

The Trove lifecycle management covers the provisioning, management, security, configuration and tuning of your database. Amrith Kumar in a recent presentation at the NYC Postgres meetup provides a good description of the specifics.

Trove is capable of describing and supporting clustering and replication topologies for the various data stores. It can support backup and restore, failover and resizing of clusters without the operator needing to know the specific syntax or complexities of a database product they are unfamiliar with.

A great example is the subtle difference in MySQL replication management using GTIDs between MySQL and MariaDB. To the developer, the interaction with MySQL and MariaDB via SQL is the same; the management of a replication topology is not identical, but it is handled by the Trove guest agent. To the operator, the management is the same.

Also in his presentation, Kumar described Tesora, an enterprise-class Trove service provided with a number of important additional features. Tesora supports additional database products including Oracle and DB2 Express, as well as commercial versions of Oracle MySQL, EnterpriseDB, Couchbase, Datastax, and MongoDB. Using the Horizon UI customizations with pre-defined Trove instances greatly reduces the work needed for operators and deployers to build their own.