The Falcon!

Some early notes by Brian Aker on Falcon, as discussed at MySQL Camp.

Falcon is a transactional storage engine that MySQL will be introducing. The first discussions were held about three years ago with Ann Harrison, and about a year and a half ago MySQL started taking the possibilities seriously.

Falcon is not an InnoDB replacement. It is a different way of looking at the problem: how it manages transactions, how it is designed, and how it flips around the way data is stored. Some points:

  • It uses as much memory as possible, like the Oracle SGA or the InnoDB buffer pool.
  • It has a row cache rather than a page cache, for more efficient memory use.
  • No locking at all: Jim Starkey doesn't believe in locking for concurrency control; Falcon relies entirely on versioning instead (see the sketch after this list).
  • Falcon has to keep all changes in memory, so it is not a great fit for long-running user transactions.
  • Characteristics: well optimised for short, fast web transactions, and designed for environments with lots of memory.

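To make the versioning point concrete, here is a minimal sketch of what the no-locking behaviour would look like from the SQL side. The ENGINE=Falcon clause and the exact read behaviour are assumptions based on how the engine was described in the session, not on a released build:

```sql
-- Hypothetical sketch: Falcon had not shipped when these notes were taken.
CREATE TABLE orders (
  id     INT PRIMARY KEY,
  status VARCHAR(20)
) ENGINE=Falcon;

-- Session A: change a row inside an open transaction.
START TRANSACTION;
UPDATE orders SET status = 'shipped' WHERE id = 1;

-- Session B: with versioning instead of locks, this read is not blocked;
-- it sees the row version that existed before Session A's uncommitted change.
SELECT status FROM orders WHERE id = 1;

-- Session A: make the new version visible to later readers.
COMMIT;
```
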
In the general discussion, a fear was raised from the floor that there will be so many storage engine options that you will need a matrix of which engine is good for what.

In conclusion, Brian mentioned that Falcon will be in alpha before the end of the year.
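
When an alpha build does appear, the usual way to check whether the engine is compiled in would be the standard engine listing; this is plain MySQL and nothing Falcon-specific:

```sql
-- List all storage engines the server knows about; a Falcon-enabled build
-- should show a FALCON row with SUPPORT = YES (or DEFAULT).
SHOW ENGINES;

-- The same information through INFORMATION_SCHEMA:
SELECT ENGINE, SUPPORT, TRANSACTIONS
FROM INFORMATION_SCHEMA.ENGINES
WHERE ENGINE = 'FALCON';
```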

