While working for a client, I needed to produce canned results for a number of different criteria, record each result set in a table for later use, and keep the position of each row within its result set.
Knowing no way to do this via a single INSERT INTO … SELECT statement, I resorted to a MySQL stored procedure. For example, using a sample I_S query and the following snippet:
...
DECLARE list CURSOR FOR SELECT table_name FROM information_schema.tables WHERE table_schema='INFORMATION_SCHEMA';
DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done=TRUE;
OPEN list;
SET result_position = 1;
SET done = FALSE;
lab: LOOP
  FETCH list INTO table_name;
  IF done THEN
    CLOSE list;
    LEAVE lab;
  END IF;
  INSERT INTO summary_table(val,pos) VALUES (table_name,result_position);
  SET result_position = result_position + 1;
END LOOP;
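For reference, a complete, runnable form of that procedure might look something like the sketch below. The procedure name record_positions, the summary_table layout, and the variable declarations are assumptions not shown in the original snippet, and the cursor variable is named v_table_name so it does not shadow the table_name column inside the cursor's SELECT.

DELIMITER //

CREATE PROCEDURE record_positions()
BEGIN
  -- Local state: loop flag, running position, and the fetched table name.
  DECLARE done BOOLEAN DEFAULT FALSE;
  DECLARE result_position INT DEFAULT 1;
  DECLARE v_table_name VARCHAR(64);
  DECLARE list CURSOR FOR
    SELECT table_name
    FROM information_schema.tables
    WHERE table_schema = 'INFORMATION_SCHEMA';
  DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = TRUE;

  OPEN list;
  lab: LOOP
    FETCH list INTO v_table_name;
    IF done THEN
      CLOSE list;
      LEAVE lab;
    END IF;
    -- Record the value along with its position in the result set.
    INSERT INTO summary_table(val,pos) VALUES (v_table_name, result_position);
    SET result_position = result_position + 1;
  END LOOP;
END//

DELIMITER ;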
However, while reviewing the work with a colleague after writing some 10+ different queries and stored procedure loops, I realized that it is possible to record the position of each row in a result set using session variables, removing the need for all that code.
SET @rowcount = 0;

SELECT table_name, @rowcount := @rowcount + 1
FROM information_schema.tables
WHERE table_schema = 'INFORMATION_SCHEMA';

+---------------------------------------+----------------------------+
| table_name                            | @rowcount := @rowcount + 1 |
+---------------------------------------+----------------------------+
| CHARACTER_SETS                        |                          1 |
| COLLATIONS                            |                          2 |
| COLLATION_CHARACTER_SET_APPLICABILITY |                          3 |
| COLUMNS                               |                          4 |
| COLUMN_PRIVILEGES                     |                          5 |
| ENGINES                               |                          6 |
| EVENTS                                |                          7 |
| FILES                                 |                          8 |
| GLOBAL_STATUS                         |                          9 |
| GLOBAL_VARIABLES                      |                         10 |
| KEY_COLUMN_USAGE                      |                         11 |
| PARTITIONS                            |                         12 |
| PLUGINS                               |                         13 |
| PROCESSLIST                           |                         14 |
| PROFILING                             |                         15 |
| REFERENTIAL_CONSTRAINTS               |                         16 |
| ROUTINES                              |                         17 |
| SCHEMATA                              |                         18 |
| SCHEMA_PRIVILEGES                     |                         19 |
| SESSION_STATUS                        |                         20 |
| SESSION_VARIABLES                     |                         21 |
| STATISTICS                            |                         22 |
| TABLES                                |                         23 |
| TABLE_CONSTRAINTS                     |                         24 |
| TABLE_PRIVILEGES                      |                         25 |
| TRIGGERS                              |                         26 |
| USER_PRIVILEGES                       |                         27 |
| VIEWS                                 |                         28 |
+---------------------------------------+----------------------------+
28 rows in set (0.01 sec)
Of course you need the all-important SET before each query. If it is omitted, the subsequent query does not produce an error; the position column simply comes back as NULL, because adding 1 to an uninitialized (NULL) user variable yields NULL.
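As a quick illustration (a sketch, not output reproduced from the post), on a connection where @rowcount has never been assigned the query below runs without error, but every computed position is NULL:

-- @rowcount has never been assigned on this connection, so it is NULL
-- and NULL + 1 evaluates to NULL for every row.
SELECT table_name, @rowcount := @rowcount + 1
FROM information_schema.tables
WHERE table_schema = 'INFORMATION_SCHEMA';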
So all I needed was:
INSERT INTO summary_table(val,pos)
SELECT table_name, @rowcount := @rowcount + 1
FROM information_schema.tables
WHERE table_schema = 'INFORMATION_SCHEMA';
A simple and trivial solution.
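Put end to end, the whole thing is just the following sketch; the summary_table definition here is an assumption, since the post does not show the real one:

-- Hypothetical target table; the real summary_table layout is not shown.
CREATE TABLE summary_table (
  val VARCHAR(64) NOT NULL,
  pos INT NOT NULL
);

-- Initialise the counter, then capture each row together with its position.
SET @rowcount = 0;

INSERT INTO summary_table(val,pos)
SELECT table_name, @rowcount := @rowcount + 1
FROM information_schema.tables
WHERE table_schema = 'INFORMATION_SCHEMA';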
DISCLAIMER:
How this performs under load, and how it is supported in different and future versions of MySQL, has not been determined.
Arjen Lentz says
AFAIK, it’s worked in every version since 3.23
Rob Wultsch says
Worth a minor note:
1. Persistent connections can have undesired behavior with UDVs. YMMV.
2. “User variable names are case sensitive before MySQL 5.0 and not case sensitive in MySQL 5.0 and up.” (correct me if I’m wrong on this, but it has been my experience…)
UDVs are very cool. I just wish the MySQL optimizer was smart enough to stop reading rows once a UDV in the WHERE clause has reached a state where no more rows can be returned.
Joe Izenman says
This doesn’t apply directly to the post, and I honestly haven’t yet thought of a situation where someone would need to do this, but it is still worth noting as a quirk of UDVs: if you have a UDV incrementing once per row, an aggregate function in the mix, and you are grouping by the UDV (like I said, not really something you’re likely to see, but an interesting quirk), the UDV will increment by 2 (or double whatever you are incrementing by):
mysql> SET @blah = 0;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT (@blah := @blah+1) AS blah, AVG(value)
-> FROM randomNumbers
-> GROUP BY blah
-> LIMIT 10;
+------+------------+
| blah | AVG(value) |
+------+------------+
|    2 |   440.0000 |
|    4 |   500.0000 |
|    6 |   626.0000 |
|    8 |   589.0000 |
|   10 |   606.0000 |
|   12 |   599.0000 |
|   14 |   687.0000 |
|   16 |   647.0000 |
|   18 |   451.0000 |
|   20 |   463.0000 |
+------+------------+
10 rows in set (0.18 sec)
Rob Wultsch says
Joe, you should try putting the assignment in the where:
http://www.xaprb.com/blog/2006/12/15/advanced-mysql-user-variable-techniques/
My guess is that MySQL pulls the results into a temp table before the grouping (and therefore increments the variable), and then increments it again when it groups/outputs.
mysql> SELECT @blah as 'blah', AVG(i)
-> FROM integers
-> WHERE @blah := IFNULL(@blah +1,0)
-> GROUP BY blah
-> LIMIT 10;
+------+--------+
| blah | AVG(i) |
+------+--------+
|    1 | 2.0000 |
|    2 | 3.0000 |
|    3 | 4.0000 |
|    4 | 5.0000 |
|    5 | 6.0000 |
|    6 | 7.0000 |
|    7 | 8.0000 |
|    8 | 9.0000 |
|    9 | 0.0000 |
+------+--------+
9 rows in set (0.00 sec)
Note that in the assignment you can do some sanity checking (such as for NULL).